Search results for: fuzzy semantic annotation
471 Assessing the Walkability and Urban Design Qualities of Campus Streets
Authors: Zhehao Zhang
Abstract:
Walking has become an indispensable and sustainable mode of travel for college students in their daily lives, and the campus street is an important carrier for students to walk and take part in a variety of activities. Improving the walkability of campus streets plays an important role in optimizing the quality of the campus spatial environment, promoting the campus walking system, and inducing multiple walking behaviors. The purpose of this paper is to explore the effect of campus layout, facility distribution, and site selection on the walkability of campus streets; to assess street design qualities in terms of imageability, enclosure, complexity, transparency, and human scale; and to further examine the relationship between street-level urban design perceptual qualities and walkability and its effect on walking behavior on campus. Taking Tianjin University as the research object, this paper uses an optimized walk score method based on walking frequency, variety, and distance to evaluate the walkability of streets from a macro perspective, measures urban design qualities through the calculation of street physical environment characteristics, and uses behavior annotation and street image data to establish a spatio-temporal behavior database for analyzing walking activity from a microscopic view. Based on the conclusions, improvement and design strategies are presented covering the built walking environment, street vitality, and walking behavior.
Keywords: walkability, streetscapes, pedestrian activity, walk score
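A minimal sketch of how a distance-decayed, weighted walk score of this kind can be computed. The category weights, decay shape, and 1600 m cutoff below are illustrative placeholders, not the paper's calibrated values, and the frequency/variety weighting of the optimized method is omitted:

```python
import math

# Hypothetical facility-category weights (assumed, not the study's values).
CATEGORY_WEIGHTS = {"canteen": 3.0, "library": 2.0, "sports": 1.5, "shop": 1.0}

def distance_decay(distance_m: float, cutoff_m: float = 1600.0) -> float:
    """Smooth decay from 1.0 at the origin to ~0 at the cutoff distance."""
    if distance_m >= cutoff_m:
        return 0.0
    return math.exp(-5.0 * (distance_m / cutoff_m) ** 2)

def walk_score(facilities: list[tuple[str, float]]) -> float:
    """facilities: (category, walking distance in metres) pairs near a street."""
    raw = sum(CATEGORY_WEIGHTS.get(cat, 0.0) * distance_decay(d)
              for cat, d in facilities)
    max_raw = sum(CATEGORY_WEIGHTS.values())  # all categories at distance 0
    return 100.0 * min(raw / max_raw, 1.0)    # normalise to a 0-100 score

print(walk_score([("canteen", 200), ("library", 650), ("shop", 1200)]))
```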
Procedia PDF Downloads 143
470 An Improved OCR Algorithm on Appearance Recognition of Electronic Components Based on Self-adaptation of Multifont Template
Authors: Zhu-Qing Jia, Tao Lin, Tong Zhou
Abstract:
Optical Character Recognition methods have been extensively utilized, yet they are rarely employed specifically for the recognition of electronic components. This paper proposes a highly effective algorithm for appearance identification of integrated circuit components based on existing character recognition methods and analyzes its pros and cons.
Keywords: optical character recognition, fuzzy page identification, mutual correlation matrix, confidence self-adaptation
Procedia PDF Downloads 537
469 Deep Learning-Based Object Detection on Low Quality Images: A Case Study of Real-Time Traffic Monitoring
Authors: Jean-Francois Rajotte, Martin Sotir, Frank Gouineau
Abstract:
The installation and management of traffic monitoring devices can be costly from both a financial and a resource point of view. It is therefore important to take advantage of in-place infrastructure to extract the most information. Here we show how low-quality urban road traffic images from cameras already available in many cities (such as Montreal, Vancouver, and Toronto) can be used to estimate traffic flow. To this end, we use a pre-trained neural network, developed for object detection, to count vehicles within images. We then compare the results with human annotations gathered through crowdsourcing campaigns. We use this comparison to assess performance and calibrate the neural network annotations. As a use case, we consider six months of continuous monitoring over hundreds of cameras installed in the city of Montreal. We compare the results with city-provided manual traffic counting performed in similar conditions at the same locations. The good performance of our system allows us to consider applications which can monitor traffic conditions in near real-time, making the counting usable for traffic-related services. Furthermore, the resulting annotations pave the way for building a historical vehicle counting dataset to be used for analysing the impact of road traffic on many city-related issues, such as urban planning, security, and pollution.
Keywords: traffic monitoring, deep learning, image annotation, vehicles, roads, artificial intelligence, real-time systems
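A sketch of the counting step using an off-the-shelf pre-trained detector. The paper does not name its network, so a COCO-trained Faster R-CNN from torchvision (0.13+) stands in here, and the score threshold plays the role of the crowdsourcing-based calibration:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# COCO category ids for vehicle classes: car, motorcycle, bus, truck.
COCO_VEHICLE_IDS = {3, 4, 6, 8}

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def count_vehicles(image_path: str, score_threshold: float = 0.5) -> int:
    """Count detected vehicles in one traffic-camera frame."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        detections = model([image])[0]  # dict with boxes, labels, scores
    keep = [(label, score)
            for label, score in zip(detections["labels"].tolist(),
                                    detections["scores"].tolist())
            if label in COCO_VEHICLE_IDS and score >= score_threshold]
    return len(keep)

print(count_vehicles("camera_frame.jpg"))  # hypothetical frame path
```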
Procedia PDF Downloads 198
468 Exploring Socio-Economic Barriers of Green Entrepreneurship in Iran and Their Interactions Using Interpretive Structural Modeling
Authors: Younis Jabarzadeh, Rahim Sarvari, Negar Ahmadi Alghalandis
Abstract:
Entrepreneurship at both the individual and organizational level is one of the most important driving forces in economic development and leads to growth and competition, job generation, and social development. Especially in developing countries, the role of entrepreneurship in economic and social prosperity is emphasized. But the effect of global economic development on the environment is undeniable, especially in negative ways, and there is a need to rethink current business models and the way entrepreneurs act, introducing new businesses that address and embed environmental issues in order to achieve sustainable development. In this paper, green or sustainable entrepreneurship is addressed in Iran to identify the challenges and barriers entrepreneurs in the economic and social sectors face in developing green business solutions. Sustainable or green entrepreneurship has been gaining interest among scholars in recent years, and addressing its challenges and barriers needs much more attention to fill the gap in the literature and facilitate the path those entrepreneurs are pursuing. This research comprises two main phases: qualitative and quantitative. In the qualitative phase, after a thorough literature review, the fuzzy Delphi method is utilized to verify the identified challenges and barriers by gathering and surveying a panel of experts. In this phase, several other contextually related factors were added to the list of barriers and challenges mentioned in the literature. In the quantitative phase, Interpretive Structural Modeling is applied to construct a network of interactions among the barriers identified in the previous phase. Again, a panel of subject matter experts comprising academic and industry experts was surveyed. The results of this study can be used by policymakers in both the public and industry sectors to introduce more systematic solutions to eliminate those barriers and to help entrepreneurs overcome the challenges of sustainable entrepreneurship. It also contributes to the literature as the first research of this type dealing with the barriers to sustainable entrepreneurship and exploring their interaction.
Keywords: green entrepreneurship, barriers, fuzzy Delphi method, interpretive structural modeling
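A sketch of the core Interpretive Structural Modeling computation on an assumed toy influence matrix (the real input would come from the expert survey): transitive closure of the barrier-influence relation, then level partitioning of the hierarchy:

```python
import numpy as np

# Illustrative 4x4 "barrier i influences barrier j" matrix (assumed data).
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=bool)

def reachability(adj: np.ndarray) -> np.ndarray:
    R = adj | np.eye(len(adj), dtype=bool)   # every barrier reaches itself
    for k in range(len(R)):                  # Warshall transitive closure
        R = R | (R[:, [k]] & R[[k], :])
    return R

def ism_levels(R: np.ndarray) -> list[list[int]]:
    remaining, levels = set(range(len(R))), []
    while remaining:
        # A barrier is top-level when its reachability set (within the
        # remaining barriers) is contained in its antecedent set.
        level = [i for i in remaining
                 if {j for j in remaining if R[i, j]} <=
                    {j for j in remaining if R[j, i]}]
        levels.append(level)
        remaining -= set(level)
    return levels

print(ism_levels(reachability(A)))  # [[3], [2], [1], [0]] for the chain above
```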
Procedia PDF Downloads 164
467 A Comparative Assessment of Information Value, Fuzzy Expert System Models for Landslide Susceptibility Mapping of Dharamshala and Surrounding, Himachal Pradesh, India
Authors: Kumari Sweta, Ajanta Goswami, Abhilasha Dixit
Abstract:
Landslides are a geomorphic process that plays an essential role in the evolution of hill-slopes and long-term landscape evolution. But the abrupt nature and associated catastrophic forces of the process can have undesirable socio-economic impacts, such as substantial economic losses, fatalities, and ecosystem, geomorphologic, and infrastructure disturbances. The estimated fatality rate is approximately 1 person/100 sq. km, and the average economic loss is more than 550 crores/year in the Himalayan belt due to landslides. This study presents a comparative performance of a statistical bivariate method and a machine learning technique for landslide susceptibility mapping in and around Dharamshala, Himachal Pradesh. The final landslide susceptibility maps (LSMs), produced with better accuracy, could be used for land-use planning to prevent future losses. Dharamshala, a part of the North-western Himalaya, is one of the fastest-growing tourism hubs, with a total population of 30,764 according to the 2011 census, and is among the hundred Indian cities to be developed as smart cities under the PM's Smart Cities Mission. A total of 209 landslide locations were identified using high-resolution linear imaging self-scanning (LISS IV) data. The thematic maps of parameters influencing landslide occurrence were generated using remote sensing and other ancillary data in the GIS environment. The landslide causative parameters used in the study are slope angle, slope aspect, elevation, curvature, topographic wetness index, relative relief, distance from lineaments, land use land cover, and geology. LSMs were prepared using the information value (Info Val) and Fuzzy Expert System (FES) models. Info Val is a statistical bivariate method in which information values were calculated as the ratio of the landslide pixels per factor class (Si/Ni) to the total landslide pixels per parameter (S/N). Using these information values, all parameters were reclassified and then summed in GIS to obtain the landslide susceptibility index (LSI) map. The FES method is a machine learning technique based on a 'mean and neighbour' strategy for the construction of the fuzzifier (input) and defuzzifier (output) membership function (MF) structure, with the FR method used for formulating if-then rules. Two types of membership structures were utilized for the membership functions: Bell-Gaussian (BG) and Trapezoidal-Triangular (TT). LSI values for BG and TT were obtained by applying the membership functions and if-then rules in MATLAB. The final LSMs were spatially and statistically validated. The validation results showed that in terms of accuracy, Info Val (83.4%) is better than BG (83.0%) and TT (82.6%), whereas in terms of spatial distribution, BG is best. Hence, considering both statistical and spatial accuracy, BG is the most accurate one.
Keywords: bivariate statistical techniques, BG and TT membership structure, fuzzy expert system, information value method, machine learning technique
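A sketch of the Info Val computation as described, with synthetic rasters standing in for the real thematic layers. The abstract's ratio (Si/Ni)/(S/N) is used directly (its natural logarithm is another common convention):

```python
import numpy as np

def information_values(factor: np.ndarray, landslides: np.ndarray) -> dict:
    """factor: integer class raster; landslides: boolean raster (same shape)."""
    S, N = landslides.sum(), landslides.size  # total landslide / total pixels
    values = {}
    for cls in np.unique(factor):
        in_cls = factor == cls
        Si, Ni = landslides[in_cls].sum(), in_cls.sum()  # per-class counts
        values[int(cls)] = (Si / Ni) / (S / N) if Ni else 0.0
    return values

# Illustrative 5x5 slope-class raster and landslide inventory mask.
slope_class = np.random.default_rng(0).integers(1, 4, size=(5, 5))
landslide = np.random.default_rng(1).random((5, 5)) > 0.8
iv = information_values(slope_class, landslide)
# Reclassify the factor raster by its information values; with several
# factors, the LSI map is the per-pixel sum of such reclassified rasters.
lsi = np.vectorize(iv.get)(slope_class)
print(iv)
```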
Procedia PDF Downloads 127
466 A Hebbian Neural Network Model of the Stroop Effect
Authors: Vadim Kulikov
Abstract:
The classical Stroop effect is the phenomenon that it takes more time to name the ink color of a printed word if the word denotes a conflicting color than if it denotes the same color. Over the last 80 years, there have been many variations of the experiment, revealing various mechanisms behind semantic, attentional, behavioral, and perceptual processing. The Stroop task is known to exhibit asymmetry. Reading the words out loud is hardly dependent on the ink color, but naming the ink color is significantly influenced by incongruent words. This asymmetry is reversed if, instead of naming the color, one has to point at a corresponding color patch. Other debated aspects are the notion of automaticity and how much of the effect is due to semantic interference and how much to response-stage interference. Is automaticity a continuous or an all-or-none phenomenon? There are many models and theories in the literature tackling these questions, which will be discussed in the presentation. None of them, however, seems to capture all the findings at once. A computational model is proposed which is based on the philosophical idea developed by the author that the mind operates as a collection of different information processing modalities, such as different sensory and descriptive modalities, which produce emergent phenomena through mutual interaction and coherence. This is the framework theory, where 'framework' attempts to generalize the concepts of modality, perspective, and 'point of view'. The architecture of this computational model consists of blocks of neurons, each block corresponding to one framework. In the simplest case, there are four: visual color processing, text reading, speech production, and attention selection modalities. In experiments where button pressing or pointing is required, a corresponding block is added. In the beginning, the weights of the neural connections are mostly set to zero. The network is trained using Hebbian learning to establish connections (corresponding to 'coherence' in framework theory) between these different modalities. The amount of data fed into the network is supposed to mimic the amount of practice a human encounters; in particular, it is assumed that converting written text into spoken words is a more practiced skill than converting visually perceived colors to spoken color names. After the training, the network performs the Stroop task. The RTs are measured in a canonical way, as these are continuous-time recurrent neural networks (CTRNN). The above-described aspects of the Stroop phenomenon, along with many others, are replicated. The model is similar to some existing connectionist models but, as will be discussed in the presentation, has many advantages: it predicts more data, and the architecture is simpler and biologically more plausible.
Keywords: connectionism, Hebbian learning, artificial neural networks, philosophy of mind, Stroop
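A minimal illustration of the Hebbian association mechanism between two framework blocks. The patterns, sizes, and learning rate are arbitrary stand-ins, not the CTRNN model itself; more practice (more update steps) yields stronger inter-block weights, mimicking the reading-versus-naming asymmetry:

```python
import numpy as np

rng = np.random.default_rng(42)
n_text, n_speech = 8, 8
W = np.zeros((n_speech, n_text))  # blocks start unconnected

def hebbian_step(W, x_text, y_speech, lr=0.01):
    """Basic Hebb rule: dW = lr * y x^T (co-active units get wired together)."""
    return W + lr * np.outer(y_speech, x_text)

word_red = rng.random(n_text)     # stand-in activity pattern: reading "red"
say_red = rng.random(n_speech)    # stand-in activity pattern: saying "red"

for _ in range(1000):             # a heavily practised mapping
    W = hebbian_step(W, word_red, say_red)

response = W @ word_red           # read-out after training
print(np.corrcoef(response, say_red)[0, 1])  # ~1.0: association learned
```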
Procedia PDF Downloads 264
465 Value Engineering Change Proposal Application in Construction of Road-Building Projects
Authors: Mohammad Mahdi Hajiali
Abstract:
Many construction projects in Iran are constrained by limited financial resources. For a developing country such as Iran, in which a large number of projects are launched each year, a method that reduces project costs would greatly help minimize the cost of major construction projects, so that projects finish faster and more efficiently. Roads are a key component of transportation infrastructure and take a considerable share of the country's budget; in addition, a major part of the related ministry's budget is spent on repairing, improving, and maintaining roads. Value engineering is a simple and powerful methodology that over the past six decades has been successful in reducing the cost of many projects. The specific approach for applying value engineering at the project implementation stage is called the value engineering change proposal (VECP). In this research, VECP was applied to one of the road-building projects in Iran in order to enhance the value of this kind of project and reduce its cost. In this case study, applying VECP raised an idea: the use of concrete pavement instead of hot mix asphalt (HMA), with fiber added to improve the concrete pavement's performance. The VE group team decided that, to choose the best alternative, they would gather experts' opinions on pavement systems and use fuzzy TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) for ranking the experts' opinions. Finally, Jointed Plain Concrete Pavement (JPCP) was selected. The group also tested concrete samples with fibers available in Iran, and the experimental results showed a significant increase in concrete properties such as flexural strength. In the end, it was shown that by using fiber-reinforced concrete pavement instead of asphalt pavement, a significant saving in cost and time can be achieved, together with improvements in quality, durability, and longevity.
Keywords: road-building projects, value engineering change proposal (VECP), Jointed Plain Concrete Pavement (JPCP), fuzzy TOPSIS, fiber-reinforced concrete
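A compact fuzzy TOPSIS sketch of the ranking step, with made-up triangular fuzzy ratings and weights in place of the experts' actual judgements:

```python
import numpy as np

# ratings[a][c] = triangular fuzzy number (l, m, u) for alternative a on
# benefit criterion c; all numbers below are illustrative placeholders.
ratings = np.array([
    [[5, 7, 9], [3, 5, 7], [7, 9, 9]],   # JPCP
    [[3, 5, 7], [5, 7, 9], [1, 3, 5]],   # HMA
    [[1, 3, 5], [3, 5, 7], [3, 5, 7]],   # another option
], dtype=float)
weights = np.array([0.5, 0.3, 0.2])      # criterion weights (assumed)

u_max = ratings[..., 2].max(axis=0)      # per-criterion upper bound
norm = ratings / u_max[None, :, None]    # linear-scale normalisation
weighted = norm * weights[None, :, None]

def fuzzy_dist(x, y):
    """Vertex distance between triangular fuzzy numbers."""
    return np.sqrt(((x - y) ** 2).mean(axis=-1))

fpis = np.ones(3)                        # fuzzy positive ideal (1, 1, 1)
fnis = np.zeros(3)                       # fuzzy negative ideal (0, 0, 0)
d_plus = fuzzy_dist(weighted, fpis).sum(axis=1)
d_minus = fuzzy_dist(weighted, fnis).sum(axis=1)
closeness = d_minus / (d_plus + d_minus)
print(closeness)                         # higher = closer to the ideal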
Procedia PDF Downloads 195
464 Metabolome-Based Profiling of African Baobab Fruit (Adansonia digitata L.) Using a Multiplex Approach of MS and NMR Techniques in Relation to Its Biological Activity
Authors: Marwa T. Badawy, Alaa F. Bakr, Nesrine Hegazi, Mohamed A. Farag, Ahmed Abdellatif
Abstract:
Diabetes mellitus (DM) is a chronic disease affecting a large population worldwide. Africa is rich in native medicinal plants with myriad health benefits, though they are less explored for the development of specific drug therapies, as in diabetes. This study aims to determine the in vivo antidiabetic potential of the well-reported and traditionally used fruits of baobab (Adansonia digitata L.) using an STZ-induced diabetic model. The in vitro cytotoxic and antioxidant properties were examined using the MTT assay on L-929 fibroblast cells and DPPH antioxidant assays, respectively. The extract showed minimal cytotoxicity with an IC50 value of 105.7 µg/mL. Histopathological and immunohistochemical investigations showed the hepatoprotective and renoprotective effects of A. digitata fruit extract, implying its protective effects against diabetes complications. These findings were further supported by biochemical assays, which showed that i.p. injection of a low dose (150 mg/kg) of A. digitata twice a week lowered fasting blood glucose levels, the lipid profile, and hepatic and renal markers. For a comprehensive overview of the extract's metabolite composition, ultra-high performance liquid chromatography (UHPLC) coupled to high-resolution tandem mass spectrometry (HRMS/MS), in synchronization with molecular networks, led to the annotation of 77 metabolites, among which 50% are reported for the first time in A. digitata fruits.
Keywords: Adansonia digitata, diabetes mellitus, metabolomics, streptozotocin, Sprague-Dawley rats
Procedia PDF Downloads 163
463 Genome-Wide Analysis of Long Terminal Repeat (LTR) Retrotransposons in Rabbit (Oryctolagus cuniculus)
Authors: Zeeshan Khan, Faisal Nouroz, Shumaila Noureen
Abstract:
The European or common rabbit (Oryctolagus cuniculus) belongs to class Mammalia, order Lagomorpha, family Leporidae. Rabbits are distributed worldwide and are native to Europe (France, Spain, and Portugal) and Africa (Morocco and Algeria). LTR retrotransposons are major Class I mobile genetic elements of eukaryotic genomes and play a crucial role in genome expansion, evolution, and diversification. They have mostly been annotated in various genomes by conventional homology-search approaches, which restrict the annotation of novel elements. The present work involved de novo identification of LTR retrotransposons by LTR_FINDER in the haploid genome of the rabbit (2247.74 Mb), distributed across 22 chromosomes, in which 7,933 putative full-length or partial copies were identified, containing 69.38 Mb of elements and accounting for 3.08% of the genome. The highest copy numbers (731) were found on chromosome 7, followed by chromosome 12 (705), while the lowest copy numbers (27) were detected on chromosome 19, with no elements identified on chromosome 21 due to a partially sequenced chromosome, unidentified nucleotides (N), and repeated simple sequence repeats (SSRs). The identified elements ranged in size from 1.2 to 25.8 kb, with average sizes between 2 and 10 kb. The highest percentage of elements (4.77%) was found on chromosome 15, and the lowest (0.55%) on chromosome 19. The most frequent tRNA type was arginine, present in the majority of the elements. Based on these results, it was estimated that the rabbit exhibits 15,866 copies containing 137.73 Mb of elements, accounting for 6.16% of the diploid genome (44 chromosomes). Further molecular analyses will be helpful in the chromosomal localization and distribution of these elements.
Keywords: rabbit, LTR retrotransposons, genome, chromosome
Procedia PDF Downloads 146
462 Variability of the Speaker's Verbal and Non-Verbal Behaviour in the Process of Changing Social Roles in the English Marketing Discourse
Authors: Yuliia Skrynnik
Abstract:
This research focuses on the interaction of verbal, non-verbal, and super-verbal communicative components used by speakers changing social roles in the marketing discourse. The changing/performing of social roles is implemented through communicative strategies and tactics whose structural, semantic, and linguo-pragmatic means are characterized by specific features and differ between the performance of the role of a supplier and that of a customer. Communication within the marketing discourse is characterized by a symmetrical role relation between communicative opponents. The strategy of realizing a supplier's social role and the strategy of realizing a customer's role influence the discursive personality's linguistic repertoire in the marketing discourse. This study takes into account that one person can be both a supplier and a customer under different circumstances, and thus explores the single individual who can be both. Cooperative and non-cooperative tactics are the instruments for the implementation of these strategies. In the marketing discourse, the verbal and non-verbal behaviour of a speaker performing a customer's social role is highly informative for speakers who perform the role of a supplier. The research methods include discourse, context-situational, pragmalinguistic, and pragmasemantic analyses, as well as the method of non-verbal component analysis. The methodology of the study includes five steps: 1) defining the configurations of speakers' social roles in the selected material; 2) establishing the type of the discourse (marketing discourse); 3) describing the specific features of a discursive personality as a subject of communication in the process of social role realization; 4) selecting the strategies and tactics which direct the interaction in different role configurations; 5) characterizing the structural, semantic, and pragmatic features of the realization of strategies and tactics, including the analysis of the interaction between verbal and non-verbal components of communication. In the marketing discourse, non-verbal behaviour is usually spontaneous but not purposeful. Thus, adequate decoding of a partner's non-verbal behaviour provides more opportunities for both the supplier and the customer. Super-verbal characteristics in the marketing discourse are crucial in defining the opponent's social status and social role at the initial stage of interaction. The research provides scenarios of stereotypical situations involving the roles of a supplier and a customer. The analysis performed has perspectives for further research connected with the study of the discursive variability of speakers' verbal and non-verbal behaviour, considering the intercultural factor influencing the process of performing social roles in the marketing discourse, and with the formation of methods for constructing scenarios of non-stereotypical situations of social role realization/change in the marketing discourse.
Keywords: discursive personality, marketing discourse, non-verbal component of communication, social role, strategy, super-verbal component of communication, tactic, verbal component of communication
Procedia PDF Downloads 119
461 Evaluating 8D Reports Using Text-Mining
Authors: Benjamin Kuester, Bjoern Eilert, Malte Stonis, Ludger Overmeyer
Abstract:
Increasing quality requirements make reliable and effective quality management indispensable. This includes complaint handling, in which the 8D method is widely used. The 8D report, as the written documentation of the 8D method, is one of the key quality documents, as it internally secures quality standards and acts as a communication medium to the customer. In practice, however, the 8D report is mostly faulty and of poor quality, and there is no quality control of 8D reports today. This paper describes the use of natural language processing for the automated evaluation of 8D reports. Based on semantic analysis and text-mining algorithms, the presented system is able to uncover content-related and formal quality deficiencies and thus increases the quality of complaint processing in the long term.
Keywords: 8D report, complaint management, evaluation system, text-mining
Procedia PDF Downloads 312
460 OSEME: A Smart Learning Environment for Music Education
Authors: Konstantinos Sofianos, Michael Stefanidakis
Abstract:
Nowadays, advances in information and communication technologies offer a range of opportunities for new approaches, methods, and tools in the field of education and training. Teacher-centered learning has changed to student-centered learning. E-learning has now matured and enables the design and construction of intelligent learning systems. A smart learning system fully adapts to a student's needs and provides them with an education based on their preferences, learning styles, and learning backgrounds. It is a wise friend, available at any time, in any place, and on any digital device. In this paper, we propose an intelligent learning system which includes an ontology covering all elements of the learning process (learning objects, learning activities) and a massive open online course (MOOC) system. This intelligent learning system can be used in music education.
Keywords: intelligent learning systems, e-learning, music education, ontology, semantic web
Procedia PDF Downloads 309
459 Visualization-Based Feature Extraction for Classification in Real-Time Interaction
Authors: Ágoston Nagy
Abstract:
This paper introduces a method of using unsupervised machine learning to visualize the feature space of a dataset in 2D in order to find the most characteristic segments in the set. After dimension reduction, users can select clusters by manual drawing. Selected clusters are recorded into a data model that is used for later predictions based on real-time data. Predictions are made with supervised learning using the Gesture Recognition Toolkit. The paper introduces two example applications: a semantic audio organizer for analyzing incoming sounds, and a gesture database organizer in which gestural data (recorded by a Leap Motion controller) are visualized for further manipulation.
Keywords: gesture recognition, machine learning, real-time interaction, visualization
Procedia PDF Downloads 351
458 Semantic Analysis of the Change in Awareness of Korean College Admission Policy
Authors: Sujin Hwang, Hyerang Park, Hyunchul Kim
Abstract:
The purpose of this study is to assess the effectiveness of the admission simplification policy. Online news articles about the 'high school record' were collected and semantically analyzed to identify social awareness during 2014 and 2015. The main results of the study are as follows. First, there was a gap between the expectation announced by KCUE that the burden on examinees would decrease and the reality: there was still strain around the university entrance exam after the enforcement of the policy. Second, private tutoring is expanding in different forms rather than shrinking under the policy. This differs from the prediction that examinees would be able to prepare for university admission without private tutoring. Thus, the college admission rules currently enforced need to be improved, and reasonable changes to the college admission system are discussed.
Keywords: education policy, private tutoring, shadow education, education admission policy
Procedia PDF Downloads 225
457 Image Ranking to Assist Object Labeling for Training Detection Models
Authors: Tonislav Ivanov, Oleksii Nedashkivskyi, Denis Babeshko, Vadim Pinskiy, Matthew Putman
Abstract:
Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation, where a person proceeds sequentially through a list of images, labeling a sufficiently high total number of examples. Instead, the method presented involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues in an iterative fashion. The algorithm used for the image selection is a deep learning algorithm, based on the U-shaped architecture, which quantifies the presence of unseen data in each image in order to find images that contain the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed using semiconductor wafer data show that labeling a subset of the data, curated by this algorithm, resulted in a model with better performance than a model produced by sequentially labeling the same amount of data. Also, similar performance is achieved compared to a model trained on exhaustive labeling of the whole dataset. Overall, the proposed approach results in a dataset that has a diverse set of examples per class as well as more balanced classes, which proves beneficial when training a deep learning model.
Keywords: computer vision, deep learning, object detection, semiconductor
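A schematic of the iterative select-then-label loop. The novelty scorer below is a simple distance-based placeholder standing in for the paper's U-shaped novelty network, and the dataset sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.random((500, 32))             # one feature vector per image
labelled = list(rng.choice(500, 10, replace=False))  # initial manual labels

def novelty(idx: int, labelled_idx: list) -> float:
    """Higher when an image is far from everything labelled so far."""
    d = np.linalg.norm(features[idx] - features[labelled_idx], axis=1)
    return d.min()

for round_ in range(5):                      # five labelling rounds
    pool = [i for i in range(500) if i not in set(labelled)]
    scores = [novelty(i, labelled) for i in pool]
    batch = [pool[j] for j in np.argsort(scores)[-20:]]  # 20 most novel images
    labelled.extend(batch)                   # a human would label these next

print(len(labelled))                         # 10 + 5 rounds x 20 = 110
```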
Procedia PDF Downloads 134
456 English Loanwords in Nigerian Languages: Sociolinguistic Survey
Authors: Surajo Ladan
Abstract:
English has been present in Nigeria since the colonial period. The advent of English in Nigeria has caused many linguistic changes in Nigerian languages, especially among the educated elite, and to some extent even ordinary people have not been spared from this phenomenon. This scenario has generated a linguistic situation which culminated in the creation of Nigerian Pidgin, a conglomeration of English and other Nigerian languages. English has infiltrated the Nigerian languages to the point that a typical Nigerian can hardly talk without code-switching or using one English word or another. The presence of English loanwords in Nigerian languages has taken on another dimension in this scientific and technological age. Most scientific and technological inventions are products of the English language and are virtually adopted into the languages with phonological, morphological, and sometimes semantic variations. This paper takes the view that Nigerians should rethink and act to protect their languages, which are invariably facing extinction, from the linguistic genocide of English.
Keywords: linguistic change, loanword, phenomenon, pidgin
Procedia PDF Downloads 860
455 Prediction of Formation Pressure Using Artificial Intelligence Techniques
Authors: Abdulmalek Ahmed
Abstract:
Formation pressure is the main factor that affects the economics and efficiency of the drilling operation. Knowing the pore pressure and the parameters that affect it helps to reduce the cost of the drilling process. Many empirical models reported in the literature have been used to calculate formation pressure based on different parameters. Some of these models used only drilling parameters to estimate pore pressure, while others predicted formation pressure based on log data. All of these models required different pressure trends, such as normal or abnormal, to predict the pore pressure. Few researchers have applied artificial intelligence (AI) techniques to predict formation pressure, using at most one or two AI methods. The objective of this research is to predict pore pressure based on both drilling parameters and log data, namely: weight on bit, rotary speed, rate of penetration, mud weight, bulk density, porosity, and delta sonic time. Real field data were used to predict formation pressure using five different artificial intelligence (AI) methods: artificial neural networks (ANN), radial basis functions (RBF), fuzzy logic (FL), support vector machines (SVM), and functional networks (FN). All AI tools were compared with different empirical models. The AI methods estimated the formation pressure with high accuracy (high correlation coefficient and low average absolute percentage error) and outperformed all previous models. The advantage of the new technique is its simplicity: it estimates pore pressure without the need for pressure trends, in contrast to other models, which require two different trends (normal or abnormal pressure). Moreover, comparing the AI tools with each other, the results indicate that SVM has the advantage in pore pressure prediction due to its fast processing speed and high performance (a correlation coefficient of 0.997 and an average absolute percentage error of 0.14%). Finally, a new empirical correlation for formation pressure was developed using the ANN method that can estimate pore pressure with high precision (a correlation coefficient of 0.998 and an average absolute percentage error of 0.17%).
Keywords: artificial intelligence (AI), formation pressure, artificial neural networks (ANN), fuzzy logic (FL), support vector machine (SVM), functional networks (FN), radial basis function (RBF)
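A sketch of the SVM branch of this workflow on synthetic stand-in data, reporting the two metrics used above (correlation coefficient and average absolute percentage error); the seven input columns mirror the listed parameters, but the data and hyperparameters are assumptions:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((300, 7))   # WOB, RPM, ROP, mud weight, density, porosity, DT
y = 8 + 4 * X[:, 3] + 2 * X[:, 4] + rng.normal(0, 0.1, 300)  # fake pressure

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

r = np.corrcoef(y_te, pred)[0, 1]                    # correlation coefficient
aape = 100 * np.mean(np.abs((y_te - pred) / y_te))   # avg absolute % error
print(f"R = {r:.3f}, AAPE = {aape:.2f}%")
```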
Procedia PDF Downloads 149
454 EcoLife and Greed Index Measurement: An Alternative Tool to Promote Sustainable Communities and Eco-Justice
Authors: Louk Aourelien Andrianos, Edward Dommen, Athena Peralta
Abstract:
Greed, as epitomized by the overconsumption of natural resources, is at the root of ecological destruction and the unsustainability of modern societies. Presently, economies rely on unrestricted structural greed, which fuels unlimited economic growth, overconsumption, and individualistic competitive behavior. Structural greed undermines the life support system on earth and threatens ecological integrity, social justice, and peace. The World Council of Churches (WCC) has developed a programme on ecological and economic justice (EEJ) with the aim of promoting an economy of life, where the economy is embedded in society and society in ecology. This paper aims at analyzing and assessing the economy of life (EcoLife) by offering an empirical tool to measure and monitor the root causes and effects of unsustainability resulting from human greed at the global, national, institutional, and individual levels. This holistic approach is based on the integrity of ecology and economy in a society founded on justice. The paper discusses critical questions such as 'what is an economy of life' and 'how can it be measured and controlled against the effect of greed'. A model called GLIMS, which stands for Greed Lines and Indices Measurement System, is used to clarify the concept of greed and to help measure the economy of life index by fuzzy logic reasoning. The inputs of the model are statistical indicators of natural resource consumption, financial realities, economic performance, social welfare, and ethical and political facts. The outputs are concrete measures of three primary indices of ecological, economic, and socio-political greed (ECOL-GI, ECON-GI, SOCI-GI) and one overall multidimensional economy of life index (EcoLife-I). EcoLife measurement aims to build awareness of an economy of life and to address the effects of greed in systemic and structural aspects. It is a tool for ethical diagnosis and policy making.
Keywords: greed line, sustainability indicators, fuzzy logic, eco-justice, World Council of Churches (WCC)
Procedia PDF Downloads 319
453 Transcriptomic Analyses of Kappaphycus alvarezii under Different Wavelengths of Light
Authors: Vun Yee Thien, Kenneth Francis Rodrigues, Clemente Michael Vui Ling Wong, Wilson Thau Lym Yong
Abstract:
Transcriptomes associated with the process of photosynthesis have offered insights into the mechanism of gene regulation in terrestrial plants; however, limited information is available as far as macroalgae are concerned. This investigation aims to decipher the underlying mechanisms associated with photosynthesis in the red alga Kappaphycus alvarezii by performing a differential expression analysis on de novo assembled transcriptomes. The comparative analysis of gene expression was designed to examine the alteration of light qualities and its effect on physiological mechanisms in the red alga. High-throughput paired-end RNA sequencing was applied to profile the transcriptome of K. alvarezii irradiated with different wavelengths of light (blue 492-455 nm, green 577-492 nm, and red 780-622 nm) as compared to the full light spectrum, resulting in more than 60 million reads per sample, assembled using Trinity and SOAPdenovo-Trans. The transcripts were annotated against the NCBI non-redundant (nr) protein, SwissProt, KEGG, and COG databases with a cutoff E-value of 1e-5, and nearly 30% of transcripts were assigned functional annotations by BLAST searches. Differential expression analysis was performed using edgeR. The DEGs were designated to seven categories: BL (blue light) regulated, GL (green light) regulated, RL (red light) regulated, BL or GL regulated, BL or RL regulated, GL or RL regulated, and either BL, GL, or RL regulated. These DEGs were mapped to terms in the KEGG database and compared with the whole transcriptome background to search for genes regulated by light quality. The outcomes of this study will enhance our understanding of the molecular mechanisms underlying light-induced responses in red algae.
Keywords: de novo transcriptome sequencing, differential gene expression, Kappaphycus alvarezii, red alga
Procedia PDF Downloads 506
452 A Decentralized Application for Secure Data Handling of Wireless Networks Using Ethereum Smart Contracts
Authors: Midhun Xavier
Abstract:
This paper introduces a method to verify multi-agent systems in industrial control systems using blockchain technology. The proposed solution enables recording and verifying each process that occurs while generating a customized product using Ethereum-based smart contracts. Node-RED software agents are developed with the help of semantic web technologies, and these software agents interact with IEC 61499 function blocks to execute the processes. The agent associated with each mechatronic component and its controller can communicate with the blockchain to record the various events that occur during each process, and the smart contract then helps to verify the process order of the customized product.
Keywords: blockchain, Ethereum, Node-RED, IEC 61499, multi-agent system, MQTT
Procedia PDF Downloads 93
451 Impacts on Marine Ecosystems Using a Multilayer Network Approach
Authors: Nelson F. F. Ebecken, Gilberto C. Pereira, Lucio P. de Andrade
Abstract:
Bays, estuaries, and coastal ecosystems are among the most used and threatened natural systems globally. Their deterioration is due to intense and increasing human activities. This paper aims to monitor the socio-ecological system in Brazil and to model and simulate it through a multilayer network representing a DPSIR structure (Drivers, Pressures, State-Impact-Response), considering the concept of ecosystem-based management to support decision-making under the national/state/municipal coastal management policy. This approach considers several interferences and can represent a significant advance in several scientific aspects. The main objective of this paper is the coupling of three different types of complex networks, the first being an ecological network, the second a social network, and the third a network of economic activities, in order to model the marine ecosystem. Multilayer networks comprise two or more 'layers', which may represent different types of interactions, different communities, different points in time, and so on. The dependency between layers results from processes that affect the various layers. For example, the dispersion of individuals between two patches affects the network structure of both samples. A multilayer network consists of (i) a set of physical nodes representing entities (e.g., species, people, companies); (ii) a set of layers, which may include multiple layering aspects (e.g., time dependency and multiple types of relationships); (iii) a set of state nodes, each of which corresponds to the manifestation of a given physical node in a specific layer; and (iv) a set of edges (weighted or not) connecting the state nodes. The edge set includes the familiar intralayer edges and the interlayer edges, which connect state nodes between layers. The methodology, applied to an existing case, uses flow cytometry and the modeling of ecological relationships (trophic and non-trophic) following fuzzy theory concepts and graph visualization. The identification of subnetworks in the fuzzy graphs is carried out using a specific computational method. This methodology allows considering the influence of different factors and helps quantify their contributions to the decision-making process.
Keywords: marine ecosystems, complex systems, multilayer network, ecosystems management
Procedia PDF Downloads 112
450 Ischemic Stroke Detection in Computed Tomography Examinations
Authors: Allan F. F. Alves, Fernando A. Bacchim Neto, Guilherme Giacomini, Marcela de Oliveira, Ana L. M. Pavan, Maria E. D. Rosa, Diana R. Pina
Abstract:
Stroke is a worldwide concern; in Brazil alone, it accounts for 10% of all registered deaths. There are two stroke types, ischemic (87%) and hemorrhagic (13%). Early diagnosis is essential to avoid irreversible cerebral damage. Non-enhanced computed tomography (NECT) is one of the main diagnostic techniques used due to its wide availability and rapid diagnosis. Detection depends on the size and severity of lesions and the time elapsed between the first symptoms and examination. The Alberta Stroke Program Early CT Score (ASPECTS) is a subjective method that increases the detection rate. The aim of this work was to implement an image segmentation system to enhance ischemic stroke and to quantify the area of ischemic and hemorrhagic stroke lesions in CT scans. We evaluated 10 patients with NECT examinations diagnosed with ischemic stroke. Analyses were performed on two axial slices, one at the level of the thalamus and basal ganglia and one adjacent to the top edge of the ganglionic structures, with window widths between 80 and 100 Hounsfield units. We used different image processing techniques such as morphological filters, the discrete wavelet transform, and fuzzy C-means clustering. Subjective analyses were performed by a neuroradiologist according to the ASPECTS scale to quantify ischemic areas in the middle cerebral artery region. These subjective analysis results were compared with objective analyses performed by the computational algorithm. Preliminary results indicate that the morphological filters actually enhance the ischemic areas for subjective evaluations. The comparison between the area of the ischemic region contoured by the neuroradiologist and the area defined by the computational algorithm showed no deviations greater than 12% in any of the 10 examination tests, although the areas contoured by the neuroradiologist tend to be smaller than those obtained by the algorithm. These results show the importance of computer-aided diagnosis software to assist neuroradiology decisions, especially in critical situations such as the choice of treatment for ischemic stroke.
Keywords: ischemic stroke, image processing, CT scans, fuzzy C-means
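A plain-NumPy fuzzy C-means sketch of the clustering step; synthetic one-dimensional intensities stand in for the windowed NECT pixel values, and the cluster count and fuzziness exponent are the usual textbook defaults rather than the study's settings:

```python
import numpy as np

def fcm(data, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Fuzzy C-means on 1-D data: returns cluster centres and memberships."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(data), n_clusters))
    U /= U.sum(axis=1, keepdims=True)             # memberships sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ data) / Um.sum(axis=0)  # weighted cluster centres
        dist = np.abs(data[:, None] - centers[None, :]) + 1e-9
        U = 1.0 / (dist ** (2 / (m - 1)))         # standard FCM update:
        U /= U.sum(axis=1, keepdims=True)         # u_ij ~ d_ij^(-2/(m-1))
    return centers, U

# Three synthetic intensity bands mimicking tissue classes in a CT window.
pixels = np.concatenate([np.random.default_rng(1).normal(mu, 2, 200)
                         for mu in (25, 32, 40)])
centers, U = fcm(pixels)
print(np.sort(centers))  # recovers centres near 25, 32, 40
```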
Procedia PDF Downloads 366
449 Comfort Sensor Using Fuzzy Logic and Arduino
Authors: Samuel John, S. Sharanya
Abstract:
Automation has become an important part of our life. It has been used to control home entertainment systems, change the ambience of rooms for different events, etc. One of the main parameters to control in a smart home is atmospheric comfort, which mainly includes temperature and relative humidity. In homes, the desired temperature of different rooms varies from 20 °C to 25 °C, and the desired relative humidity is around 50%; however, both vary widely. Hence, automated measurement of these parameters to ensure comfort assumes significance. To achieve this, a fuzzy logic controller using Arduino was developed with MATLAB. Arduino is an open-source hardware platform consisting of a 28-pin ATmega328 microcontroller, 14 digital input/output pins, and an inbuilt ADC. It runs on 5 V and 3.3 V power supported by an on-board voltage regulator. Some of the digital pins on the Arduino provide PWM (pulse width modulation) signals, which can be used in different applications. The Arduino platform provides an integrated development environment, which includes support for the C, C++, and Java programming languages. In the present work, a soft sensor that can indirectly measure temperature and humidity was introduced into this system and used to process these measurements to ensure comfort. The Sugeno method (whose output variables are functions or singletons/constants, making it more suitable for implementation on microcontrollers) was used in the soft sensor in MATLAB and then interfaced to the Arduino, which is in turn interfaced to the temperature-humidity sensor DHT11. The DHT11 acts as the sensing element in this system. Further, a capacitive humidity sensor and a thermistor were also used to support the measurement of the temperature and relative humidity of the surroundings and provide a digital signal on the data pin. The comfort sensor developed was able to measure temperature and relative humidity correctly. The comfort percentage was calculated, and the temperature in the room was controlled accordingly. This system was placed in different rooms of the house to verify that it modifies the comfort values depending on the temperature and relative humidity of the environment. Compared to existing comfort control sensors, this system was found to provide an accurate comfort percentage. Depending on the comfort percentage, the air conditioners and coolers in the room were controlled. The main highlight of the project is its cost efficiency.
Keywords: Arduino, DHT11, soft sensor, Sugeno
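A zero-order Sugeno sketch of the comfort computation in plain Python; the membership breakpoints and rule consequents are illustrative guesses, not the deployed MATLAB tuning:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def comfort(temp_c: float, rh: float) -> float:
    t_cool = tri(temp_c, 10, 18, 23)
    t_good = tri(temp_c, 20, 22.5, 25)   # 20-25 degC target band
    t_hot = tri(temp_c, 24, 32, 40)
    h_good = tri(rh, 30, 50, 70)         # ~50% RH target
    h_bad = max(tri(rh, 0, 10, 35), tri(rh, 65, 90, 100))
    # Sugeno rules: (firing strength, constant output in % comfort).
    rules = [(min(t_good, h_good), 100.0),
             (min(t_cool, h_good), 60.0),
             (min(t_hot, h_good), 40.0),
             (h_bad, 20.0)]
    num = sum(w * z for w, z in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0     # weighted-average defuzzification

print(comfort(22.0, 50.0))               # near 100% comfort
print(comfort(31.0, 80.0))               # low comfort
```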
Procedia PDF Downloads 310
448 Learning to Translate by Learning to Communicate to an Entailment Classifier
Authors: Szymon Rutkowski, Tomasz Korbak
Abstract:
We present a reinforcement-learning-based method for training neural machine translation models without parallel corpora. The standard encoder-decoder approach to machine translation suffers from two problems we aim to address. First, it needs parallel corpora, which are scarce, especially for low-resource languages. Second, it lacks psychological plausibility in its learning procedure: learning a foreign language is about learning to communicate useful information, not merely learning to transduce from one language's 'encoding' to another. We instead pose the problem of learning to translate as learning a policy in a communication game between two agents: the translator and the classifier. The classifier is trained beforehand on a natural language inference task (determining the entailment relation between a premise and a hypothesis) in the target language. The translator produces a sequence of actions that correspond to generating translations of both the hypothesis and the premise, which are then passed to the classifier. The translator is rewarded for the classifier's performance in determining entailment between the sentences the translator has rendered in the classifier's language. The translator's performance thus reflects its ability to communicate useful information to the classifier. In effect, we train a machine translation model without the need for parallel corpora altogether. While similar reinforcement learning formulations for zero-shot translation have been proposed before, there are several improvements we introduce. While prior research aimed at grounding the translation task in the physical world by evaluating agents on an image captioning task, we found that using a linguistic task is more sample-efficient. Natural language inference (also known as recognizing textual entailment) captures semantic properties of sentence pairs that are poorly correlated with semantic similarity, thus enforcing a basic understanding of the role played by compositionality. It has been shown that models trained to recognize textual entailment produce high-quality general-purpose sentence embeddings transferable to other tasks. We use the Stanford Natural Language Inference (SNLI) dataset as well as its analogous datasets for French (XNLI) and Polish (CDSCorpus). Textual entailment corpora can be obtained relatively easily for any language, which makes our approach more extensible to low-resource languages than traditional approaches based on parallel corpora. We evaluated a number of reinforcement learning algorithms (including policy gradients and actor-critic) to solve the problem of optimizing the translator's policy and found that our attempts yield some promising improvements over previous approaches to reinforcement-learning-based zero-shot machine translation.
Keywords: agent-based language learning, low-resource translation, natural language inference, neural machine translation, reinforcement learning
Procedia PDF Downloads 127
447 A Proposed Approach for Emotion Lexicon Enrichment
Authors: Amr Mansour Mohsen, Hesham Ahmed Hassan, Amira M. Idrees
Abstract:
Document analysis is an important research field that aims to gather information by analyzing the data in documents. Since an important target in many fields is to understand what people actually want, sentiment analysis has become a vital field tightly related to document analysis. This research focuses on analyzing text documents to classify each document according to its opinion. The aim of this research is to detect emotions in text documents by enriching the lexicon, adapting its content based on semantic pattern extraction. The proposed approach is presented, and different experiments are run from different perspectives to reveal the positive impact of the proposed approach on classification results.
Keywords: document analysis, sentiment analysis, emotion detection, WEKA tool, NRC lexicon
Procedia PDF Downloads 441
446 A Hardware-in-the-Loop Simulation for the Development of Advanced Control System Design for a Spinal Joint Wear Simulator
Authors: Kaushikk Iyer, Richard M Hall, David Keeling
Abstract:
Hardware-in-the-loop (HIL) simulation is an advanced technique for developing and testing complex real-time control systems. This paper presents the benefits of HIL simulation and how it can be implemented and used effectively to develop, test, and validate advanced control algorithms used in a spinal joint wear simulator for the tribological testing of spinal disc prostheses. The spinal wear simulator is technologically the most advanced machine currently employed for the in-vitro testing of newly developed spinal disc implants. However, the existing control techniques, such as simple position control, do not allow the simulator to test non-sinusoidal waveforms. Thus, there is a need for better and more advanced control methods that can be developed and tested rigorously but safely before deploying them to the real simulator. A benchtop HIL setup was created for experimentation, controller verification, and validation purposes, allowing different control strategies to be tested rapidly in a safe environment. The HIL simulation aspect of this setup attempts to replicate similar spinal motion and loading conditions. The spinal joint wear simulator contains a four-bar linkage powered by electromechanical actuators. LabVIEW software is used to design a kinematic model of the spinal wear simulator to validate how each link contributes towards the final motion of the implant under test. As a result, the implant articulates with an angular motion specified in the international standard ISO 18192-1, which defines fixed, simplified, and sinusoidal motion and load profiles for the wear testing of cervical disc implants. Using a PID controller, a velocity-based position control algorithm was developed to interface with the benchtop setup that performs the HIL simulation. In addition to the PID, a fuzzy logic controller (FLC) was also developed, acting as a supervisory controller. The FLC provides intelligence to the PID controller by automatically tuning the controller for profiles that vary in amplitude, shape, and frequency. This combination of fuzzy and PID control is novel in the wear testing application for spinal simulators and demonstrated superior performance compared to PID alone when tested across a spectrum of frequencies. The results obtained are successfully validated against the load and motion tolerances specified by the ISO 18192-1 standard and fall within limits, that is, ±0.5° at the maxima and minima of the motion and ±2% of the complete cycle for phasing. The simulation results prove the efficacy of the test setup using HIL simulation to verify and validate the accuracy and robustness of the prospective controller before its deployment into the spinal wear simulator. This method of testing controllers enables a wide range of possibilities to test advanced control algorithms, potentially even profiles of patients performing various daily living activities.
Keywords: fuzzy-PID controller, hardware-in-the-loop (HIL), real-time simulation, spinal wear simulator
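A toy sketch of the fuzzy-supervised PID idea on a first-order plant; the plant model, base gains, and the gain-scaling rule are all assumptions rather than the simulator's actual controller:

```python
import numpy as np

def fuzzy_gain_scale(err, err_max=1.0):
    """Crude supervisor: larger errors -> more aggressive gains (0.5x..1.5x)."""
    level = min(abs(err) / err_max, 1.0)  # fuzzy "error is large" degree
    return 0.5 + level                    # interpolated scaling factor

kp, ki, kd = 2.0, 1.0, 0.05               # base PID gains (assumed)
dt, x, integ, prev_err = 0.01, 0.0, 0.0, 0.0
target = np.sin(np.linspace(0, 2 * np.pi, 1000))  # reference trajectory

for ref in target:
    err = ref - x
    s = fuzzy_gain_scale(err)
    integ += err * dt
    deriv = (err - prev_err) / dt
    u = s * (kp * err + ki * integ + kd * deriv)  # scheduled PID law
    x += dt * (-x + u)                            # first-order plant x' = -x + u
    prev_err = err

print(f"final tracking error: {abs(target[-1] - x):.4f}")
```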
Procedia PDF Downloads 168
445 The Cultural and Semantic Danger of English Transparent Words Translated from English into Arabic
Authors: Abdullah Khuwaileh
Abstract:
While the teaching and translating of vocabulary is no longer a neglected area in ELT in general and in translation in particular, the psychology of its acquisition has been neglected. Our paper aims at exploring some of the learning and translating conditions under which vocabulary is acquired and translated properly. To achieve this objective, two teaching methods (experiments) were applied to four translators to measure their acquisition of a number of transparent vocabulary items. Some of these items were deliberately chosen from 'deceptively transparent words'. All the data, the sample, etc., were taken from Jordan University of Science and Technology (JUST) and Yarmouk University, where the researcher is employed. The study showed that translators may translate transparent words inaccurately, particularly if these words are uncontextualized. It was also shown that the morphological structures of words may lead translators, or even EFL learners, to misinterpretations of meaning.
Keywords: English, transparent word, processing, translation
Procedia PDF Downloads 70
444 Development of a Decision Model to Optimize Total Cost in Food Supply Chain
Authors: Henry Lau, Dilupa Nakandala, Li Zhao
Abstract:
All along the length of the supply chain, fresh food firms face the challenge of managing both product quality, due to the perishable nature of the products, and product cost. This paper develops a method to assist logistics managers upstream in the fresh food supply chain in making cost-optimized decisions regarding transportation, with the objective of minimizing the total cost while maintaining the quality of food products above acceptable levels. Considering the case of multiple fresh food products collected from multiple farms and transported to a warehouse or a retailer, this study develops a total cost model that includes the various costs incurred during transportation. The practical application of the model is illustrated using several computational intelligence approaches, including genetic algorithms (GA), fuzzy genetic algorithms (FGA), as well as an improved simulated annealing (SA) procedure applied with a repair mechanism for efficiency benchmarking. We demonstrate the practical viability of these approaches through a simulation study based on pertinent data and evaluate the simulation outcomes. The application of the proposed total cost model was demonstrated using the three approaches of GA, FGA, and SA with a repair mechanism. All three approaches are adoptable; however, based on the performance evaluation, it was evident that the FGA is more likely to produce better performance than the other two approaches. This study provides a pragmatic approach for supporting logistics and supply chain practitioners in the fresh food industry in making important decisions on the arrangements and procedures related to the transportation of multiple fresh food products from multiple farms to a warehouse in a cost-effective way without compromising product quality. This study extends the literature on cold supply chain management by investigating cost and quality optimization in a multi-product scenario from farms to a retailer and minimizing cost by keeping quality above expected levels at delivery. The scalability of the proposed generic function enables application to alternative situations in practice, such as different storage environments and transportation conditions.
Keywords: cost optimization, food supply chain, fuzzy sets, genetic algorithms, product quality, transportation
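A bare-bones genetic algorithm sketch of the optimization step; the chromosome encoding, cost terms, and the fuzzy/repair variants of the study are richer than this stand-in, which minimizes an assumed transport-plus-balance cost over farm-to-truck assignments:

```python
import numpy as np

rng = np.random.default_rng(0)
N_FARMS, N_TRUCKS, POP, GENS = 12, 3, 60, 200
cost_matrix = rng.random((N_FARMS, N_TRUCKS)) * 100  # farm-to-truck cost

def total_cost(assign):                    # assign[i] = truck for farm i
    transport = cost_matrix[np.arange(N_FARMS), assign].sum()
    imbalance = np.bincount(assign, minlength=N_TRUCKS).std()
    return transport + 10 * imbalance      # quality proxy: balanced loads

pop = rng.integers(0, N_TRUCKS, (POP, N_FARMS))
for _ in range(GENS):
    fitness = np.array([total_cost(ind) for ind in pop])
    elite = pop[np.argsort(fitness)[: POP // 2]]    # truncation selection
    cut = rng.integers(1, N_FARMS, POP // 2)
    mates = elite[rng.permutation(POP // 2)]
    children = np.where(np.arange(N_FARMS) < cut[:, None], elite, mates)
    mutate = rng.random(children.shape) < 0.05      # one-point crossover,
    children[mutate] = rng.integers(0, N_TRUCKS, mutate.sum())  # then mutation
    pop = np.vstack([elite, children])

best = pop[np.argmin([total_cost(ind) for ind in pop])]
print(total_cost(best))
```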
Procedia PDF Downloads 223
443 Results and Insights from a Developmental Psychology Study on the Presentation of Juvenility in Punk Fanzines
Authors: Marc Dietrich
Abstract:
Youth cultures like punk, as much as the media relevant to the specific scenes associated with them, offer ample opportunity for young people or juvenile adults to construct their personal identities. However, developmental psychology has largely neglected such identity construction processes over the last decades. Such was not always the case: early developmental psychologists intensely studied youth cultures and their meaningful objects and media in the early 20th century but lost interest when cultural studies and the social sciences occupied the field after World War II. Our project, Constructions of Juvenility and Generation(ality), funded by the German Federal Ministry for Education and Research, reintegrates the study of youth cultures and their meaningful objects and media into a developmental psychology perspective. We present an empirical study of the ways in which youth, juvenility, and generation(ality) are constructed and negotiated in underground media like punk fanzines (a portmanteau of fan and magazine), including both the semantic and aesthetic aspects of these construction processes within punk culture. The fanzine sample was assembled using the theoretical sampling strategy typical of GTM studies. Acknowledging fanzines as artful self-produced media made by scene members for scene members, we conceptualize them as authentic documents of scene norms and values. Drawing on an analysis of both text and (cover) images in punk fanzines published in Germany (within a sample dating from 1981 to 2015) using a novel visual grounded theory approach, we found that: a) juvenility is a highly contested concept in punk culture, with its semantic quality and valuation varying with the perspectives present within the culture (e.g., embryo punks versus older punks); b) juvenility is constructed as having energy and being socio-critical, independent of biological age; c) juvenility is not regarded as an ideal per se in German punk culture, which constructs old age in a largely positive way (e.g., as a marker of being real and a historical innovator); d) juvenility is regarded as a habit that should be kept for life, as it is constantly adapted to individual biographical trajectories such as specific job situations or having a family. Consequently, identity negotiation as documented in the zines attempts to balance subculturally driven perspectives on life and society with the pragmatic requirements of a bourgeois life. The proposed paper will present the main results of this large-scale study of punk fanzines and show how developmental psychology perspectives, as represented in the novel methodology applied in it, can advance the study of youth cultures.
Keywords: construction of juvenility, developmental psychology, visual GTM, youth culture, fanzines
Procedia PDF Downloads 292
442 Human Behavior Modeling in Video Surveillance of Conference Halls
Authors: Nour Charara, Hussein Charara, Omar Abou Khaled, Hani Abdallah, Elena Mugellini
Abstract:
In this paper, we present a human behavior modeling approach for video scenes. This approach is used to model the normal behaviors in conference halls. We exploit the Probabilistic Latent Semantic Analysis (PLSA) technique, using the 'Bag-of-Terms' paradigm, as a tool for exploring video data and learning the model by grouping similar activities. Our term vocabulary consists of 3D spatio-temporal patch groups assigned by the direction of motion. Our video representation preserves the spatial information, the object trajectory, and the motion. The main importance of this approach is that it can be adapted to detect abnormal behaviors in order to ensure and enhance human security.
Keywords: activity modeling, clustering, PLSA, video representation
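A tiny PLSA trained with EM on a synthetic term-document count matrix, the core of the 'Bag-of-Terms' model. Here the "documents" would be video clips and the "terms" the motion-direction patch groups; the counts and dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_docs, n_terms, n_topics = 20, 50, 4
counts = rng.integers(0, 5, (n_docs, n_terms)).astype(float)

# Random initialisation of P(z|d) and P(w|z), normalised to distributions.
p_z_d = rng.random((n_docs, n_topics)); p_z_d /= p_z_d.sum(1, keepdims=True)
p_w_z = rng.random((n_topics, n_terms)); p_w_z /= p_w_z.sum(1, keepdims=True)

for _ in range(100):
    # E-step: responsibility of each latent activity z for each (d, w) pair.
    joint = p_z_d[:, :, None] * p_w_z[None, :, :]        # (docs, z, terms)
    post = joint / (joint.sum(axis=1, keepdims=True) + 1e-12)
    # M-step: re-estimate P(w|z) and P(z|d) from expected counts.
    expected = counts[:, None, :] * post                 # (docs, z, terms)
    p_w_z = expected.sum(axis=0)
    p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
    p_z_d = expected.sum(axis=2)
    p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12

print(p_z_d[0])  # mixture of latent activities for the first clip
```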
Procedia PDF Downloads 392