Search results for: threshold models

5286 Studying the Impact of Soil Characteristics in Displacement of Retaining Walls Using Finite Element

Authors: Mojtaba Ahmadabadi, Akbar Masoudi, Morteza Rezai

Abstract:

In this paper, using the finite element method, the effect of soil and wall characteristics was investigated. Thirty-two different models were studied with different parameters. These studies could calculate displacement at any height of the wall for frictional-cohesive soils. The main purpose of this research is to determine the most effective soil characteristics in reducing the wall displacement. Comparing different models showed that an overall increase in the internal friction angle, the angle of friction between soil and wall, and the modulus of elasticity reduces the displacement of the wall. In addition, an increase in the unit weight of the soil will increase the wall displacement. Based on the results, it can be said that all wall displacements were of the overturning type, and the backfill soil was bulging. Results show that the internal friction angle and the angle of friction between soil and wall have the highest impact on reducing wall displacement. One of the advantages of this study is taking into account all the parameters of the soil and wall in the displacement distribution of the wall and backfill soil. The aim of this study is to provide the best conditions for reducing the wall displacement and to describe the displacement distribution in the wall and soil.

Keywords: retaining wall, FEM, soil and wall interaction, angle of internal friction of the soil, wall displacement

Procedia PDF Downloads 378
5285 Predictive Analysis of the Stock Price Market Trends with Deep Learning

Authors: Suraj Mehrotra

Abstract:

The stock market is a volatile, bustling marketplace that is a cornerstone of economics. It defines whether companies are successful or in a downward spiral. A thorough understanding of it is important - many companies have whole divisions dedicated to analysis of both their own stock and that of rivaling companies. Linking the world of finance, especially the stock market, with artificial intelligence (AI) has been a relatively recent development. Predicting how stocks will do considering all external factors and previous data has always been a human task. With the help of AI, however, machine learning models can help us make more complete predictions of financial trends. Looking at the stock market specifically, predicting the open, closing, high, and low prices for the next day is very hard to do, and machine learning makes this task a lot easier. A model that builds upon itself and takes in external factors as weights can predict trends far into the future. When used effectively, new doors can be opened up in the business and finance world, and companies can make better and more complete decisions. This paper explores the various techniques used in the prediction of stock prices, from traditional statistical methods to deep learning and neural network-based approaches, among other methods. It provides a detailed analysis of the techniques and also explores the challenges in predictive analysis. Comparing the accuracy of four different models - linear regression, neural network, decision tree, and naïve Bayes - on the different stocks (Apple, Google, Tesla, Amazon, UnitedHealthcare, Exxon Mobil, J.P. Morgan Chase, and Johnson & Johnson), the naïve Bayes and linear regression models worked best. For the testing set, the naïve Bayes model had the highest accuracy along with the linear regression model, followed by the neural network model and then the decision tree model. The training set showed similar results, except that the decision tree model made perfect predictions with complete accuracy, which suggests that the decision tree model overfitted the training set and therefore generalized poorly to the testing set.
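
A minimal illustrative sketch of the kind of four-model comparison described above, recast as a next-day direction classification task since accuracy is the reported metric. The synthetic features, labels, and model settings here are stand-ins, not the paper's pipeline.

```python
# Illustrative sketch (not the paper's exact pipeline): compare the four model
# families mentioned in the abstract on a toy next-day direction task.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Synthetic stand-in for OHLC-derived features (returns, ranges, volume change).
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "linear (logistic)": LogisticRegression(),
    "naive Bayes": GaussianNB(),
    "decision tree": DecisionTreeClassifier(),   # prone to overfitting the training set
    "neural network": MLPClassifier(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name,
          "train:", round(accuracy_score(y_tr, model.predict(X_tr)), 3),
          "test:", round(accuracy_score(y_te, model.predict(X_te)), 3))
```

On data like this, the unpruned decision tree typically reaches perfect training accuracy while trailing on the test split, mirroring the overfitting pattern the abstract reports.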

Keywords: machine learning, testing set, artificial intelligence, stock analysis

Procedia PDF Downloads 82
5284 Artificial Neurons Based on Memristors for Spiking Neural Networks

Authors: Yan Yu, Wang Yu, Chen Xintong, Liu Yi, Zhang Yanzhong, Wang Yanji, Chen Xingyu, Zhang Miaocheng, Tong Yi

Abstract:

Neuromorphic computing based on spiking neural networks (SNNs) has emerged as a promising avenue for building the next generation of intelligent computing systems. Owing to their high-density integration, low power, and outstanding nonlinearity, memristors have attracted growing attention for realizing SNNs. However, fabricating a low-power and robust memristor-based spiking neuron without extra electrical components is still a challenge for brain-inspired systems. In this work, we demonstrate a TiO₂-based threshold switching (TS) memristor that emulates a leaky integrate-and-fire (LIF) neuron without auxiliary circuits and use it to realize single-layer fully connected (FC) SNNs. Moreover, our TiO₂-based resistive switching (RS) memristors realize spike-timing-dependent plasticity (STDP), originating from the Ag diffusion-based filamentary mechanism. This work demonstrates that TiO₂-based memristors may provide an efficient method to construct hardware neuromorphic computing systems.
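
For readers unfamiliar with the neuron model being emulated, the following is a software sketch of standard LIF dynamics: integrate input with a leak, fire when a threshold is crossed, then reset (the role played by the TS memristor's switching and relaxation). The time constant, threshold, and input values are illustrative assumptions, not device measurements.

```python
# Minimal leaky integrate-and-fire (LIF) dynamics, the behavior the TS
# memristor emulates in hardware. Parameters are illustrative only.
import numpy as np

def lif(i_in, dt=1e-4, tau=1e-2, v_th=1.0, v_reset=0.0):
    """Integrate input current; return membrane trace and spike times."""
    v, trace, spikes = 0.0, [], []
    for t, i in enumerate(i_in):
        v += dt / tau * (-v + i)       # leaky integration step
        if v >= v_th:                  # threshold crossed -> fire
            spikes.append(t * dt)
            v = v_reset                # reset, as the TS device relaxes
        trace.append(v)
    return np.array(trace), spikes

v_trace, spike_times = lif(np.full(2000, 1.2))  # constant supra-threshold drive
print(len(spike_times), "spikes in", 2000 * 1e-4, "s")
```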

Keywords: leaky integrate-and-fire, memristor, spiking neural networks, spike-timing-dependent plasticity

Procedia PDF Downloads 117
5283 Creating Energy Sustainability in an Enterprise

Authors: John Lamb, Robert Epstein, Vasundhara L. Bhupathi, Sanjeev Kumar Marimekala

Abstract:

As we enter the new era of artificial intelligence (AI) and cloud computing, we rely heavily on the machine learning and natural language processing capabilities of AI, and on energy-efficient hardware and software devices, in almost every industry sector. In these industry sectors, much emphasis is on developing new and innovative methods for producing and conserving energy and slowing the depletion of natural resources. The core pillars of sustainability are economic, environmental, and social, informally referred to as the 3 P's (People, Planet and Profits). The 3 P's play a vital role in creating a core sustainability model in the enterprise. Natural resources are continually being depleted, so there is more focus and growing demand for renewable energy. With this growing demand, there is also a growing concern in many industries about how to reduce carbon emissions and conserve natural resources while adopting sustainability in corporate business models and policies. In this paper, we discuss the driving forces - such as climate change, natural disasters, pandemics, disruptive technologies, corporate policies, scaled business models, and emerging social media and AI platforms - that influence the three main pillars of sustainability (3 P's). Through this paper, we would like to bring an overall perspective on enterprise strategies, with a primary focus on the cultural shifts needed to adopt energy-efficient operational models. Overall, many industries across the globe are incorporating core sustainability principles such as reducing energy costs, reducing greenhouse gas (GHG) emissions, reducing waste and increasing recycling, adopting advanced monitoring and metering infrastructure, and reducing server footprint and compute resources (shared IT services, cloud computing, and application modernization) with the vision of a sustainable environment.

Keywords: climate change, pandemic, disruptive technology, government policies, business model, machine learning and natural language processing, AI, social media platform, cloud computing, advanced monitoring, metering infrastructure

Procedia PDF Downloads 93
5282 End-to-End Spanish-English Sequence Learning Translation Model

Authors: Vidhu Mitha Goutham, Ruma Mukherjee

Abstract:

The low availability of well-trained, unlimited, dynamic-access models for specific languages makes it hard for corporate users to adopt quick translation techniques and incorporate them into product solutions. As translation tasks increasingly require a dynamic sequence learning curve, stable, cost-free, open-source models are scarce. We survey and compare current translation techniques and propose a modified sequence-to-sequence model repurposed with attention techniques. Sequence learning using an encoder-decoder model is now paving the path for higher precision levels in translation. Using a Convolutional Neural Network (CNN) encoder and a Recurrent Neural Network (RNN) decoder, we use Fairseq tools to produce an end-to-end, bilingually trained Spanish-English machine translation model that includes source language detection. We obtain competitive results using a duo-lingo-corpus-trained model, providing prospective, ready-made plug-in use for compound sentences and document translations. Our model serves as a decent system for large, organizational data translation needs. While acknowledging its shortcomings and future scope, it also stands as a well-optimized deep neural network model and solution.
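
A compact PyTorch sketch of the CNN-encoder / RNN-decoder idea described above. The abstract's actual system is built with Fairseq; the layer shapes, vocabulary size, and pooling choice here are illustrative assumptions, not the authors' configuration.

```python
# Skeleton of a CNN encoder feeding an RNN (GRU) decoder, the architecture
# family named in the abstract. Hyperparameters are placeholder assumptions.
import torch
import torch.nn as nn

class CNNEncoder(nn.Module):
    def __init__(self, vocab, dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)

    def forward(self, src):                        # src: (batch, src_len)
        x = self.emb(src).transpose(1, 2)          # (batch, dim, src_len)
        h = torch.relu(self.conv(x)).mean(dim=2)   # pooled sentence encoding
        return h.unsqueeze(0)                      # initial decoder state

class RNNDecoder(nn.Module):
    def __init__(self, vocab, dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, tgt, state):                 # tgt: (batch, tgt_len)
        y, _ = self.gru(self.emb(tgt), state)
        return self.out(y)                         # per-token vocabulary logits

enc, dec = CNNEncoder(vocab=8000), RNNDecoder(vocab=8000)
src = torch.randint(0, 8000, (2, 12))              # toy Spanish token ids
tgt = torch.randint(0, 8000, (2, 10))              # toy English token ids
print(dec(tgt, enc(src)).shape)                    # (2, 10, 8000)
```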

Keywords: attention, encoder-decoder, Fairseq, Seq2Seq, Spanish, translation

Procedia PDF Downloads 162
5281 Corporate Governance of State-Owned Enterprises: A Comparative Analysis

Authors: Adeyemi Adebayo, Barry Ackers

Abstract:

This paper comparatively analyses the corporate governance of SOEs in South Africa and Singapore in the context of the World Bank's framework for corporate governance of SOEs. This framework ensures that the analysis holistically covers key aspects of corporate governance of SOEs in these states. In order to ground our understanding of the paths taken by SOEs in the two states, the paper presents the evolution and reforms of SOEs in each state before analyzing key aspects of their corporate governance. The analysis shows that even though SOEs in South Africa and Singapore are comparable in a number of ways, there are notable differences. In this context, this paper finds that the main difference between corporate governance of SOEs in South Africa and Singapore is their organizing model. Further, the analysis shows, among other findings, that SOE boards in Singapore are better remunerated. A further finding reveals that, even though some board members are politically connected, Singaporean SOE boards are better constituted on the basis of skills and experience than SOE boards in South Africa. Overall, the analysis opens up new debates and concludes by providing avenues for further research.

Keywords: corporate governance, comparative corporate governance, corporate governance framework, government business enterprises, government linked companies, organizing models, ownership models, state-owned companies, state-owned enterprises

Procedia PDF Downloads 203
5280 Error Amount in Viscoelasticity Analysis Depending on Time Step Size and Method Used in ANSYS

Authors: A. Fettahoglu

Abstract:

The theory of viscoelasticity is used by many researchers to represent the behavior of many materials, such as pavements on roads or bridges. Several studies have used analytical methods and rheology to predict the material behavior of simple models. Today, more complex engineering structures are analyzed using the Finite Element Method, in which material behavior is embedded by means of three-dimensional viscoelastic material laws. As a result, structures of irregular geometry and domain, like the pavements of bridges, can be analyzed by means of the Finite Element Method and three-dimensional viscoelastic equations. In the scope of this study, the rheological models embedded in ANSYS, namely generalized Maxwell elements and Prony series, which are the two methods used by ANSYS to represent viscoelastic material behavior, are presented explicitly. Subsequently, a practical problem, which has an analytical solution given in the literature, is used to verify the applicability of the viscoelasticity tool embedded in ANSYS. Finally, the amount of error in the results of ANSYS is compared with the analytical results to indicate the influence of the method used and the time step size.
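
For context, the Prony-series representation of the generalized Maxwell model expresses the relaxation modulus as G(t) = G_inf + sum_i G_i exp(-t/tau_i). The sketch below just evaluates that series; the coefficient values are invented for illustration, not taken from the paper or from ANSYS.

```python
# Relaxation modulus of a generalized Maxwell model via a Prony series:
# G(t) = G_inf + sum_i G_i * exp(-t / tau_i). Coefficients are illustrative.
import numpy as np

def prony_relaxation(t, g_inf, g_i, tau_i):
    t = np.asarray(t, dtype=float)
    return g_inf + sum(g * np.exp(-t / tau) for g, tau in zip(g_i, tau_i))

t = np.logspace(-2, 3, 6)                                # time points [s]
G = prony_relaxation(t, g_inf=1.0, g_i=[4.0, 2.0], tau_i=[0.1, 10.0])  # [MPa]
print(np.round(G, 4))   # decays from the glassy toward the long-term modulus
```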

Keywords: generalized Maxwell model, finite element method, Prony series, time step size, viscoelasticity

Procedia PDF Downloads 355
5279 Virtual Modelling of Turbulent Fibre Flow in a Low Consistency Refiner for a Sustainable and Energy Efficient Process

Authors: Simon Ingelsten, Anton Lundberg, Vijay Shankar, Lars-Olof Landström, Örjan Johansson

Abstract:

The flow in a low consistency disc refiner is simulated with the aim of identifying flow structures possibly of importance for a future study to optimise the energy efficiency of refining processes. A simplified flow geometry is used, where a single groove of a refiner disc is modelled. Two different fibre models are used to simulate turbulent fibre suspension flow in the groove. The first is a Bingham viscoplastic fluid model, where the fibre suspension is treated as a non-Newtonian fluid with a yield stress. The second is a new model proposed in a recent study, where the suspended fibres' effect on the flow is accounted for through a modelled orientation distribution function (ODF). Both models yielded similar results with small differences. Certain flow characteristics that were expected and that were found in the literature were identified. Some of these flow characteristics may be of importance in a future process to optimise the refiner geometry to increase the energy efficiency. Further study and a more detailed flow model are, however, needed for the simulations to yield results valid for quantitative use in such an optimisation study. An outline of the next steps in such a study is proposed.
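
As a side note on the first fibre model: yield-stress fluids are often implemented in CFD through a regularized effective viscosity, for example the Papanastasiou form of the Bingham law shown below. Whether the authors used this particular regularization is not stated; the parameter values are illustrative, not their calibration.

```python
# Regularized Bingham viscoplastic law (Papanastasiou form) commonly used in
# CFD: mu_eff = mu_p + tau_y * (1 - exp(-m * gamma_dot)) / gamma_dot.
# Parameters below are illustrative assumptions, not the paper's values.
import numpy as np

def bingham_papanastasiou(gamma_dot, mu_p=0.01, tau_y=5.0, m=100.0):
    gamma_dot = np.maximum(np.asarray(gamma_dot, float), 1e-12)  # avoid /0
    return mu_p + tau_y * (1.0 - np.exp(-m * gamma_dot)) / gamma_dot

shear_rates = np.logspace(-3, 3, 7)            # [1/s]
print(bingham_papanastasiou(shear_rates))      # [Pa*s]; very viscous below yield
```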

Keywords: disc refiner, fibre flow, sustainability, turbulence modelling

Procedia PDF Downloads 391
5278 Shedding Light on the Black Box: Explaining Deep Neural Network Prediction of Clinical Outcome

Authors: Yijun Shao, Yan Cheng, Rashmee U. Shah, Charlene R. Weir, Bruce E. Bray, Qing Zeng-Treitler

Abstract:

Deep neural network (DNN) models are being explored in the clinical domain, following their recent success in other domains such as image recognition. For clinical adoption, outcome prediction models require explanation, but due to the multiple non-linear inner transformations, DNN models are viewed by many as a black box. In this study, we developed a deep neural network model for predicting 1-year mortality of patients who underwent major cardiovascular procedures (MCVPs), using a temporal image representation of past medical history as input. The dataset was obtained from the electronic medical data warehouse administered by the Veterans Affairs Informatics and Computing Infrastructure (VINCI). We identified 21,355 veterans who had their first MCVP in 2014. Features for prediction included demographics, diagnoses, procedures, medication orders, hospitalizations, and frailty measures extracted from clinical notes. Temporal variables were created based on the patient history data in the 2-year window prior to the index MCVP, and a temporal image was created from these variables for each individual patient. To generate the explanation for the DNN model, we defined a new concept called the impact score, based on the impact that the presence/value of a clinical condition has on the predicted outcome. Like the (log) odds ratios reported by a logistic regression (LR) model, impact scores are continuous variables intended to shed light on the black box model. For comparison, a logistic regression model was fitted on the same dataset. In our cohort, about 6.8% of patients died within one year. The prediction of the DNN model achieved an area under the curve (AUC) of 78.5%, while the LR model achieved an AUC of 74.6%. A strong but not perfect correlation was found between the aggregated impact scores and the log odds ratios (Spearman's rho = 0.74), which helped validate our explanation.
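
A hedged sketch of one plausible reading of the impact score idea: the change in the network's predicted risk when a clinical condition is toggled on versus off, averaged over patients, then rank-correlated with the LR coefficients. The abstract does not give the exact definition, so treat this as an assumption; the data are synthetic.

```python
# Sketch: feature-toggle "impact scores" for a black-box model, compared with
# logistic-regression log-odds via Spearman's rho. Definition is assumed.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(2000, 10)).astype(float)   # binary condition flags
logit = X @ rng.normal(size=10) - 1.0
y = (rng.random(2000) < 1 / (1 + np.exp(-logit))).astype(int)

dnn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
lr = LogisticRegression().fit(X, y)

impact = []
for j in range(X.shape[1]):
    X_off = X.copy(); X_off[:, j] = 0.0                 # condition absent
    X_on = X.copy();  X_on[:, j] = 1.0                  # condition present
    impact.append(np.mean(dnn.predict_proba(X_on)[:, 1]
                          - dnn.predict_proba(X_off)[:, 1]))

rho, _ = spearmanr(impact, lr.coef_[0])                 # compare with LR log-odds
print("Spearman rho:", round(rho, 2))
```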

Keywords: deep neural network, temporal data, prediction, frailty, logistic regression model

Procedia PDF Downloads 140
5277 How Unicode Glyphs Revolutionized the Way We Communicate

Authors: Levi Corallo

Abstract:

Typed language made by humans on computers and cell phones has made a significant departure from previous modes of written language exchange. While acronyms remain one of the most predominant markings of typed language, another and perhaps more recent revolution in the way humans communicate has been the use of symbols or glyphs, primarily Emojis, introduced on the iPhone keyboard by Apple in 2008. This paper seeks to analyze the use of symbols in typed communication from both a linguistic and a machine learning perspective. The Unicode system will be explored, and methods of encoding will be juxtaposed with current machine and human perception. The paper examines how typed symbol usage occurs in conversation, as well as current research methods dealing with Emojis, like sentiment analysis, predictive text models, and so on. This study proposes that sequential analysis is a significant feature for analyzing Unicode characters in a corpus with machine learning. Current models that are trying to learn or translate the meaning of Emojis should start learning from bi- and tri-grams of Emoji, as well as observing the relationship between combinations of different Emoji in tandem. The sociolinguistics of an entirely new vernacular, referred to here as 'typed language', will also be delineated across the analysis, covering Unicode glyphs from both a semantic and a technical perspective.
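
To make the bi-/tri-gram proposal concrete, the following is a small sketch that pulls emoji glyphs out of typed text and counts their n-grams. The regex only roughly approximates the Unicode emoji ranges (it ignores variation selectors and ZWJ sequences), and the corpus line is invented.

```python
# Illustrative sketch: extract emoji bi-grams and tri-grams from typed text,
# the sequential features the abstract argues models should learn from.
import re
from collections import Counter

EMOJI = re.compile(r"[\U0001F300-\U0001FAFF\u2600-\u27BF]")  # rough emoji ranges

def emoji_ngrams(text, n):
    glyphs = EMOJI.findall(text)
    return Counter(tuple(glyphs[i:i + n]) for i in range(len(glyphs) - n + 1))

corpus = "great game 🔥🔥🎉 see you tomorrow 😂😂❤"
print(emoji_ngrams(corpus, 2))   # emoji bi-grams with counts
print(emoji_ngrams(corpus, 3))   # emoji tri-grams with counts
```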

Keywords: unicode, text symbols, emojis, glyphs, communication

Procedia PDF Downloads 184
5276 QSAR Studies of Certain Novel Heterocycles Derived from Bis-1,2,4-Triazoles as Anti-Tumor Agents

Authors: Madhusudan Purohit, Stephen Philip, Bharathkumar Inturi

Abstract:

In this paper, we report the quantitative structure-activity relationship of novel bis-triazole derivatives for predicting their activity profile. The full model encompassed a dataset of 46 bis-triazoles. The Tripos Sybyl X 2.0 program was used to conduct CoMSIA QSAR modeling. The Partial Least-Squares (PLS) analysis method was used to conduct the statistical analysis and to derive a QSAR model based on the field values of the CoMSIA descriptors. The compounds were divided into training and test sets. The compounds were evaluated by various CoMSIA parameters to predict the best QSAR model. An optimum number of components was first determined separately by cross-validation regression for the CoMSIA model, which was then applied in the final analysis. A series of parameters was used for the study, and the best-fit model was obtained using the donor, partition coefficient, and steric parameters. The CoMSIA model demonstrated good statistical results, with a regression coefficient (r²) and a cross-validated coefficient (q²) of 0.575 and 0.830, respectively. The standard error for the predicted model was 0.16322. In the CoMSIA model, the steric descriptors make a marginally larger contribution than the electrostatic descriptors. The finding that the steric descriptor is the largest contributor to the CoMSIA QSAR model is consistent with the observation that more than half of the binding site area is occupied by steric regions.

Keywords: 3D QSAR, CoMSIA, triazoles, novel heterocycles

Procedia PDF Downloads 431
5275 Language Development and Growing Spanning Trees in Children's Semantic Networks

Authors: Somayeh Sadat Hashemi Kamangar, Fatemeh Bakouie, Shahriar Gharibzadeh

Abstract:

In this study, we aim to exploit Maximum Spanning Trees (MSTs) of children's semantic networks to investigate their language development. To do so, we examine the graph-theoretic properties of word-embedding networks. The nodes of these networks are the words children normatively acquire prior to the age of 30 months, and the links are built from the cosine vector similarity of those words. These networks are weighted graphs, and the strength of each link is determined by the numerical similarity of the two words (nodes) on its two sides. To avoid reducing the weighted networks to binary ones by setting a threshold, constructing MSTs presents a solution. An MST is a unique sub-graph that connects all the nodes in such a way that the sum of all the link weights is maximized without forming cycles. MSTs, as the backbone of the semantic networks, are suitable for examining developmental changes in semantic network topology in children. From these trees, several parameters were calculated to characterize the developmental change in network organization. We showed that MSTs provide an elegant method sensitive enough to capture subtle developmental changes in semantic network organization.
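
A small sketch of the construction described above: build a cosine-weighted word graph and extract its maximum spanning tree with networkx. The word list and random vectors are stand-ins for the normative child vocabulary and embeddings used in the study.

```python
# Sketch: weighted semantic graph from toy word vectors -> maximum spanning tree.
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
words = ["ball", "dog", "milk", "car", "book", "cup"]   # stand-in vocabulary
vecs = {w: rng.normal(size=50) for w in words}           # stand-in embeddings

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

G = nx.Graph()
for i, w1 in enumerate(words):
    for w2 in words[i + 1:]:
        G.add_edge(w1, w2, weight=cosine(vecs[w1], vecs[w2]))

# Keeps all nodes, maximizes total link weight, forms no cycles.
mst = nx.maximum_spanning_tree(G)
print(sorted(mst.edges(data="weight")))
```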

Keywords: maximum spanning trees, word-embedding, semantic networks, language development

Procedia PDF Downloads 129
5274 Excitation Modeling for Hidden Markov Model-Based Speech Synthesis Based on Wavelet Analysis

Authors: M. Kiran Reddy, K. Sreenivasa Rao

Abstract:

The conventional Hidden Markov Model (HMM)-based speech synthesis system (HTS) uses only a pulse excitation model, which differs significantly from the natural excitation signal. Hence, buzziness can be perceived in the speech generated using HTS. This paper proposes an efficient excitation modeling method that can significantly reduce the buzziness and improve the quality of HMM-based speech synthesis. The proposed approach models the pitch-synchronous residual frames extracted from the residual excitation signal. Each pitch-synchronous residual frame is parameterized using 30 wavelet coefficients, which are found to accurately capture the perceptually important information present in the residual waveform. In the synthesis phase, the residual frames are reconstructed from the generated wavelet coefficients and are pitch-synchronously overlap-added to generate the excitation signal. The proposed excitation modeling method is integrated into an HMM-based speech synthesis system. Evaluation results indicate that the speech synthesized with the proposed excitation model is significantly better than speech generated using state-of-the-art excitation modeling methods.
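
A sketch of the parameterization step using PyWavelets: decompose a residual frame with a discrete wavelet transform, keep the 30 most energetic coefficients, and reconstruct. The wavelet family (db4), decomposition level, and largest-magnitude truncation rule are assumptions; the abstract does not specify them.

```python
# Sketch: parameterize a pitch-synchronous residual frame with 30 wavelet
# coefficients and reconstruct it. Wavelet and truncation rule are assumed.
import numpy as np
import pywt

frame = np.random.randn(200)                         # stand-in residual frame
coeffs, slices = pywt.coeffs_to_array(pywt.wavedec(frame, "db4", level=4))

idx = np.argsort(np.abs(coeffs))[::-1][:30]          # 30 largest coefficients
compact = np.zeros_like(coeffs)
compact[idx] = coeffs[idx]

recon = pywt.waverec(
    pywt.array_to_coeffs(compact, slices, output_format="wavedec"), "db4")
print("reconstruction error:", np.linalg.norm(frame - recon[:len(frame)]))
```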

Keywords: excitation modeling, hidden Markov models, pitch-synchronous frames, speech synthesis, wavelet coefficients

Procedia PDF Downloads 234
5273 Naked Machismo: Uncovered Masculinity in an Israeli Home Design Campaign

Authors: Gilad Padva, Sigal Barak Brandes

Abstract:

This research centers on an unexpected Israeli advertising campaign for Elemento, a local furniture company, which eroticizes male nudity. The campaign comprises a series of printed ads, depicting naked male models in effeminate positions, published in Haaretz, a small-scale yet highly prestigious daily newspaper typically read by urban, middle-upper-class, left-wing Israelis. Apparently, this campaign embodies an alternative masculinity that challenges the prevalent machismo in Israeli society and advertising. Although some of the ads focus on young men in effeminate positions, they never expose their genitals and anuses, and their bodies are never permeable. The 2010s Elemento male models seemingly contrast with the conventional representation of manhood in contemporary mainstream advertising. They display a somewhat inactive, passive, and self-indulgent masculinity which involves 'conspicuous leisure'. In the process of commodity fetishism, the advertised furniture is emptied of the original meaning of its production and then filled with new meanings in ways that both mystify the product and turn it into a fetish object. Yet, our research critically reconsiders this sensational campaign as a sophisticated patriarchal parody that does not subvert but rather reconfirms and even fetishizes patriarchal premises; it parodies effeminacy rather than the prevalent (Israeli) machismo. Following Pierre Bourdieu's politics of cultural taste, our research reconsiders and criticizes the male models' domesticated masculinity in a fantasized, cosmopolitan, hedonistic habitus. Notwithstanding, we suggest that the Elemento campaign, despite its conformity, does question some Israeli and global axioms about gender roles, corporeal ideologies, idealized bodies, and domesticated phalluses and anuses. Although the naked truth is uncovered by this campaign, it does erect a vibrant discussion of contemporary masculinities and their exploitation in current mass consumption.

Keywords: male body, campaign, advertising, gender studies, men's studies, Israeli culture, masculinity, parody, effeminacy

Procedia PDF Downloads 199
5272 Logical-Probabilistic Modeling of the Reliability of Complex Systems

Authors: Sergo Tsiramua, Sulkhan Sulkhanishvili, Elisabed Asabashvili, Lazare Kvirtia

Abstract:

The paper presents logical-probabilistic methods, models, and algorithms for the reliability assessment of complex systems, on the basis of which a web application for structural analysis and reliability assessment of systems was created. The reliability assessment process includes the following stages, which are reflected in the application: 1) construction of a graphical scheme of the structural reliability of the system; 2) transformation of the graphical scheme into a logical representation and modeling of the shortest paths of successful functioning of the system; 3) description of the system operability condition with a logical function in disjunctive normal form (DNF); 4) transformation of the DNF into orthogonal disjunctive normal form (ODNF) using the orthogonalization algorithm; 5) replacement of the logical elements with probabilistic elements in the ODNF, obtaining a reliability estimation polynomial and quantifying reliability; 6) calculation of the weights of the elements. Using the logical-probabilistic methods, models, and algorithms discussed in the paper, special software was created, by means of which a quantitative assessment of the reliability of systems of complex structure is produced. As a result, structural analysis of systems, and the research and design of optimally structured systems, are carried out.
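
To make stages 2-5 concrete, the following is a brute-force sketch that goes from minimal path sets (the shortest ways of successful functioning) to a numeric reliability value by enumerating disjoint (orthogonal) element states. This enumeration plays the role of the paper's symbolic orthogonalization algorithm; the 5-element bridge structure and element reliabilities are a textbook example, not the paper's case study.

```python
# Sketch: minimal path sets -> system reliability by orthogonal (disjoint)
# enumeration of element states. Structure and probabilities are illustrative.
from itertools import product

paths = [{1, 2}, {3, 4}, {1, 5, 4}, {3, 5, 2}]    # minimal path sets (bridge)
p = {1: 0.9, 2: 0.9, 3: 0.8, 4: 0.8, 5: 0.95}    # element reliabilities

def system_reliability(paths, p):
    elems = sorted(p)
    r = 0.0
    for state in product([0, 1], repeat=len(elems)):   # mutually disjoint terms
        up = {e for e, s in zip(elems, state) if s}
        if any(path <= up for path in paths):          # system operable?
            prob = 1.0
            for e, s in zip(elems, state):
                prob *= p[e] if s else 1 - p[e]
            r += prob
    return r

print(round(system_reliability(paths, p), 5))
```

The point of the ODNF in the paper is precisely that its terms are disjoint, so probabilities can be summed term by term as done above, only without enumerating all 2^n states.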

Keywords: complex systems, logical-probabilistic methods, orthogonalization algorithm, reliability, weight of element

Procedia PDF Downloads 54
5271 Unveiling Game Designers’ Designing Practices: Five-Essential-Steps Model

Authors: Mifrah Ahmad

Abstract:

Game design processes vary with the intentions of the game. Digital games have versatile starting and finishing processes, and these have been reported throughout the literature over decades. However, how game designers approach designing games in industry, and whether they consider existing models or frameworks to assist their design process, remains under-reported. Therefore, this paper discusses 17 game designer participants' perspectives on how they approach designing games and how their experience of designing various games influences their practice. This research is conducted in an Australian context through a phenomenology approach, where semi-structured interviews were designed and grounded in John Dewey's theory of experience. The audio data collected was analyzed using NVivo and interpreted through the interpretivist paradigm to contextualize the essence of game designers' experiences in their practice and unfold their design, development, and iteration methodologies. As a result, a generic game-design model is proposed that illuminates a sequence of steps enabling game designers' initiatives toward a successful game design process. A 'Five-Essential-Steps' model (5ESM) for designing digital games may potentially assist early-career game designers, gaming researchers, and academics pursuing the design process of games, educational games, or serious games.

Keywords: game designers practice, experiential design, designing models, game design approaches, designing process, software design, top-down model

Procedia PDF Downloads 35
5270 Development of an Information System Based on the Establishment and Evaluation of Performance Rating by Application Part/Type of Remodeling Element Technologies

Authors: Sungwon Jung

Abstract:

The percentage of apartment buildings aged 20 years or older in South Korea is approximately 20% (1.55 million units), and an explosive increase in aged housing is expected around the first planned new towns. Accordingly, we should prepare for social issues such as difficulties in housing leases and the degradation of housing performance. The improvement of the performance of aged housing is essential for achieving the national energy and carbon reduction goals, and we should develop techniques to respond to the changing construction environment. Furthermore, we should develop a performance evaluation system appropriate for the demands of residents, such as the improvement of remodeling floor plans through performance improvement in line with the residence types of housing-vulnerable groups such as low-income households and elderly people living alone. For this purpose, remodeling techniques and business models optimized for the target complexes must be spread through the development of various business models. In addition, it is necessary to improve the remodeling business by improving the laws and systems related to the improvement of residential performance, and to prepare techniques to respond to the increasing business demand. In other words, performance improvement and evaluation and knowledge systems need to be researched as new issues related to remodeling that have not been addressed in existing research.

Keywords: remodelling, performance evaluation, web-based system, big data

Procedia PDF Downloads 214
5269 Conceptualizing the Cyber Insecurity Risk in the Ethics of Automated Warfare

Authors: Otto Kakhidze, Hoda Alkhzaimi, Adam Ramey, Nasir Memon

Abstract:

This paper provides an alternative, cyber-security-based conceptual framework for the ethics of automated warfare. The large body of work produced on fully or partially autonomous warfare systems tends to overlook malicious security factors, such as the possibility of technical attacks on these systems, when it comes to moral and legal decision-making. The argument provides a risk-oriented justification for why technical malicious risks cannot be dismissed in legal, ethical, and policy considerations when warfare models are being implemented and deployed. The assumptions of the paper are supported by a broader model that incorporates the perspective of technological vulnerabilities through the lenses of game theory, just war theory, and standard and non-standard defense ethics. The paper argues that a conventional risk-benefit analysis that does not consider ethical factors is insufficient for making legal and policy decisions on automated warfare. This approach will provide the substructure for security and defense experts, as well as legal scholars, ethicists, and decision theorists, to work towards common justificatory grounds that accommodate the technical security concerns overlooked in current legal and policy models.

Keywords: automated warfare, ethics of automation, inherent hijacking, security vulnerabilities, risk, uncertainty

Procedia PDF Downloads 348
5268 MhAGCN: Multi-Head Attention Graph Convolutional Network for Web Services Classification

Authors: Bing Li, Zhi Li, Yilong Yang

Abstract:

Web service classification can promote the quality of service discovery and management in the service repository and is widely used to locate the services developers desire. Although traditional classification methods based on supervised learning models can accomplish classification tasks, developers need to manually label web services, and the quality of these labels may not be sufficient to establish an accurate classifier for service classification. With the number of web services doubling, the manual tagging method has become unrealistic. In recent years, the attention mechanism has made remarkable progress in the field of deep learning, and its huge potential has been fully demonstrated in various fields. This paper designs a multi-head attention graph convolutional network (MHAGCN) service classification method, which can assign different weights to neighborhood nodes without complicated matrix operations or relying on an understanding of the entire graph structure. The framework combines the advantages of the attention mechanism and the graph convolutional neural network and can classify web services through automatic feature extraction. Comprehensive experimental results on a real dataset not only show the superior performance of the proposed model over existing models but also demonstrate its potentially good interpretability for graph analysis.
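
A condensed PyTorch sketch of one multi-head attention layer over a graph, in the GAT style the abstract evokes (per-head attention weights over neighbours, no global graph operations). The scoring function, dimensions, and adjacency construction are assumptions; the paper's exact layer is not given in the abstract.

```python
# Sketch of a multi-head graph attention layer: each head scores neighbours,
# softmax-normalizes over them, and aggregates features. Details are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadGraphAttention(nn.Module):
    def __init__(self, in_dim, out_dim, heads=4):
        super().__init__()
        self.heads, self.out_dim = heads, out_dim
        self.proj = nn.Linear(in_dim, heads * out_dim, bias=False)
        self.attn = nn.Parameter(torch.randn(heads, 2 * out_dim))

    def forward(self, x, adj):                     # x: (N, in_dim), adj: (N, N)
        N = x.size(0)
        h = self.proj(x).view(N, self.heads, self.out_dim)
        hi = h.unsqueeze(1).expand(N, N, self.heads, self.out_dim)
        hj = h.unsqueeze(0).expand(N, N, self.heads, self.out_dim)
        e = F.leaky_relu((torch.cat([hi, hj], -1) * self.attn).sum(-1))
        e = e.masked_fill(adj.unsqueeze(-1) == 0, float("-inf"))
        alpha = torch.softmax(e, dim=1)            # per-head neighbour weights
        out = torch.einsum("ijh,jhd->ihd", alpha, h)
        return out.reshape(N, -1)                  # concatenate the heads

x = torch.randn(5, 16)                             # 5 service nodes, 16 features
adj = ((torch.rand(5, 5) > 0.5).float() + torch.eye(5) > 0).float()  # self-loops
print(MultiHeadGraphAttention(16, 8)(x, adj).shape)  # (5, 32)
```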

Keywords: attention mechanism, graph convolutional network, interpretability, service classification, service discovery

Procedia PDF Downloads 125
5267 A Fuzzy Analytic Hierarchy Process Approach for the Decision of Maintenance Priorities of Building Entities: A Case Study in a Facilities Management Company

Authors: Wai Ho Darrell Kwok

Abstract:

Building entities are valuable assets of a society; however, all of them suffer from the ravages of weather and time. Facilitating onerous maintenance activities is the only way to maintain or enhance the value and contemporary standard of the premises. However, the maintenance budget is always bounded by a corresponding threshold limit. In order to optimize the allocation of limited resources in carrying out maintenance, there is a substantial need to prioritize maintenance work. This paper reveals the application of Fuzzy AHP in a facilities management company to determine maintenance priorities on the basis of predetermined criteria, viz., Building Status (BS), Effects on Fabrics (EF), Effects on Sustainability (ES), Effects on Users (EU), Importance of Usage (IU) and Physical Condition (PC), in dealing with 8 predominant categories of building component maintenance for building premises. From the case study, it is found that 'building exterior repainting or re-tiling', 'spalling concrete repair works in exterior areas' and 'lobby renovation' are the top three maintenance priorities according to facilities managers and maintenance experts. Through the application of Fuzzy AHP to the maintenance priority decision algorithm, a more systematic and easily compared set of scalar priority factors can be explored, even when considering other multi-criteria decision scenarios of building maintenance.
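
A hedged sketch of one common fuzzy AHP weighting step (Buckley's fuzzy geometric mean with centroid defuzzification) for three of the six criteria. The triangular-fuzzy pairwise judgements below are invented for illustration; the paper's judgements, and possibly its aggregation method, differ.

```python
# Sketch: triangular-fuzzy pairwise comparisons -> criterion weights via
# fuzzy geometric means (Buckley) and centroid defuzzification. Judgements
# and the method variant are assumptions, not the paper's data.
import numpy as np

# Each entry is a triangular fuzzy number (l, m, u); matrix is reciprocal.
M = [
    [(1, 1, 1),       (2, 3, 4),   (1/2, 1, 3/2)],   # BS vs (BS, EU, PC)
    [(1/4, 1/3, 1/2), (1, 1, 1),   (1/3, 1/2, 1)],   # EU
    [(2/3, 1, 2),     (1, 2, 3),   (1, 1, 1)],       # PC
]

def fuzzy_geo_mean(row):
    arr = np.array(row)                   # shape (n, 3): rows are TFNs
    return arr.prod(axis=0) ** (1 / len(row))

g = np.array([fuzzy_geo_mean(r) for r in M])  # fuzzy weight per criterion
w = g.mean(axis=1)                            # centroid defuzzification
w /= w.sum()                                  # normalize to crisp weights
for name, wi in zip(["BS", "EU", "PC"], w):
    print(name, round(float(wi), 3))
```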

Keywords: building maintenance, fuzzy AHP, maintenance priority, multi-criteria decision making

Procedia PDF Downloads 227
5266 Statistical Analysis with Prediction Models of User Satisfaction in Software Project Factors

Authors: Katawut Kaewbanjong

Abstract:

We analyzed a large volume of data and identified the software project factors significant to user satisfaction. A statistical significance analysis (logistic regression) and a collinearity analysis determined the significant factors from a group of 71 pre-defined factors across 191 software projects in ISBSG Release 12. The eight prediction models used for testing the prediction potential of these factors were the neural network, k-NN, naïve Bayes, random forest, decision tree, gradient boosted tree, linear regression, and logistic regression models. Fifteen pre-defined factors were truly significant in predicting user satisfaction, and they provided 82.71% prediction accuracy when used with a neural network prediction model. These factors were: client-server, personnel changes, total defects delivered, project inactive time, industry sector, application type, development type, how methodology was acquired, development techniques, decision-making process, intended market, size estimate approach, size estimate method, cost recording method, and effort estimate method. These findings may benefit software development managers considerably.

Keywords: prediction model, statistical analysis, software project, user satisfaction factor

Procedia PDF Downloads 110
5265 Evaluating India's Smart Cities against the Sustainable Development Goals

Authors: Suneet Jagdev

Abstract:

17 Sustainable Development Goals were adopted by world leaders in September 2015 at the United Nations Sustainable Development Summit. These goals were adopted by UN member states to promote prosperity, health, and human rights while protecting the planet. Around the same time, the Government of India launched the Smart City Initiative to speed up the development of state-of-the-art infrastructure and services in 100 cities, with a focus on sustainable and inclusive development. These cities are meant to become role models for other cities in India and to promote sustainable regional development. This paper examines the goals set under the Smart City Initiative and evaluates them against the Sustainable Development Goals, using case studies of selected Smart Cities in India. The study concludes that most Smart City projects at present actually consist of individual solutions to individual problems identified in a community, rather than comprehensive models for the complex issues facing cities across India. A systematic, logical, and comparative analysis of the relevant literature and data has been carried out, drawing on government sources, government papers, research papers by various experts on the topic, and results from online surveys. Case studies have been used for a graphical analysis highlighting the issues of migration, ecology, economy, and social equity in these Smart Cities.

Keywords: housing, migration, smart cities, sustainable development goals, urban infrastructure

Procedia PDF Downloads 394
5264 Development of Time Series Forecasting Model for Dengue Cases in Nakhon Si Thammarat, Southern Thailand

Authors: Manit Pollar

Abstract:

Identifying dengue epidemic periods early would be helpful for taking the necessary actions to prevent dengue outbreaks. Providing an accurate prediction of dengue epidemic seasons will give local authorities sufficient time to take the necessary decisions and actions to safeguard the situation. This study aimed to develop a forecasting model for the number of dengue incidences in Nakhon Si Thammarat Province, Southern Thailand, using time series analysis. We developed Seasonal Autoregressive Integrated Moving Average (SARIMA) models on the monthly data collected between 2003-2011 and validated the models using data collected between January-September 2012. The results of this study revealed that the SARIMA(1,1,0)(1,2,1)12 model closely described the trends and seasonality of dengue incidence in Nakhon Si Thammarat for the years 2003-2011. The study showed that the one-step approach to predicting dengue incidences provided significantly more accurate predictions than the twelve-step approach. The model, even if based purely on statistical data analysis, can provide a useful basis for the allocation of resources for disease prevention.
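
A sketch of fitting the reported SARIMA(1,1,0)(1,2,1)12 specification with statsmodels. The synthetic seasonal series below stands in for the real 2003-2011 monthly case counts, which are not reproduced in the abstract.

```python
# Sketch: fit SARIMA(1,1,0)(1,2,1)12 on a synthetic monthly series and
# forecast the Jan-Sep 2012 validation window. Data are stand-ins.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
idx = pd.date_range("2003-01", periods=108, freq="MS")   # 2003-2011, monthly
season = 50 + 30 * np.sin(2 * np.pi * idx.month / 12)    # annual cycle
cases = pd.Series((season + rng.normal(0, 5, 108)).round(), index=idx)

model = SARIMAX(cases, order=(1, 1, 0), seasonal_order=(1, 2, 1, 12))
fit = model.fit(disp=False)
print(fit.forecast(steps=9))   # forecasts for the Jan-Sep 2012 window
```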

Keywords: SARIMA, time series model, dengue cases, Thailand

Procedia PDF Downloads 343
5263 Aircraft Automatic Collision Avoidance Using Spiral Geometric Approach

Authors: M. Orefice, V. Di Vito

Abstract:

This paper describes a collision avoidance algorithm developed starting from the mathematical modeling of the flight of insects in terms of spiral and conchospiral geometric paths. It is able to calculate a proper avoidance manoeuvre aimed at preventing the infringement of a predefined distance threshold between the ownship and the considered intruder, while minimizing the ownship's trajectory deviation from the original path and complying with the aircraft performance limitations and dynamic constraints. The algorithm is designed to be suitable for real-time applications, so that it can be considered for implementation in the most recent airborne automatic collision avoidance systems using traffic data received through an ADS-B IN device. The presented approach is able to take into account the rules of the air, thanks to specifically designed decision-making logic that, based on the encounter geometry, selects the direction of the calculated collision avoidance manoeuvre so as to comply with the rules of the air, for instance the fundamental right-of-way rule. In the paper, the proposed collision avoidance algorithm is presented, and its preliminary design and software implementation are described. The applicability of this method has been proved through preliminary simulation tests performed in a 2D environment considering single-intruder encounter geometries, as reported and discussed in the paper.
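
Purely to illustrate the geometry, the following sketch samples a logarithmic spiral of the kind used to shape a lateral avoidance path and checks its minimum separation from an intruder position. The spiral parameters, units, and intruder position are assumptions, not the paper's design values.

```python
# Illustrative geometry only: points on a logarithmic spiral r = a * e^(b*theta)
# around the ownship, and the minimum separation to an intruder. Values assumed.
import numpy as np

def log_spiral(a=1.0, b=0.15, theta_max=4 * np.pi, n=400):
    theta = np.linspace(0.0, theta_max, n)
    r = a * np.exp(b * theta)                    # spiral radius growth
    return np.c_[r * np.cos(theta), r * np.sin(theta)]

path = log_spiral()
intruder = np.array([6.0, 2.0])                  # intruder position (same units)
min_sep = np.min(np.linalg.norm(path - intruder, axis=1))
print("minimum separation along spiral:", round(float(min_sep), 2))
# A planner would reject spiral parameters whose min_sep violates the threshold.
```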

Keywords: ADS-B based application, collision avoidance, RPAS, spiral geometry

Procedia PDF Downloads 231
5262 Defective Autophagy Disturbs Neural Migration and Network Activity in hiPSC-Derived Cockayne Syndrome B Disease Models

Authors: Julia Kapr, Andrea Rossi, Haribaskar Ramachandran, Marius Pollet, Ilka Egger, Selina Dangeleit, Katharina Koch, Jean Krutmann, Ellen Fritsche

Abstract:

It is widely acknowledged that animal models do not always represent human disease. Human brain development in particular is difficult to model in animals due to a variety of structural and functional species-specificities, which causes significant discrepancies between predicted and apparent drug efficacies in clinical trials and their subsequent failure. Emerging alternatives based on 3D in vitro approaches, such as human brain spheres or organoids, may in the future reduce and ultimately replace animal models. Here, we present a human induced pluripotent stem cell (hiPSC)-based 3D neural in vitro disease model for Cockayne Syndrome B (CSB). CSB is a rare hereditary disease accompanied by severe neurologic defects, such as microcephaly, ataxia and intellectual disability, with currently no treatment options. The aim of this study is therefore to investigate the molecular and cellular defects found in neural hiPSC-derived CSB models, since understanding the underlying pathology of CSB enables the development of treatment options. The two CSB models used in this study comprise a patient-derived hiPSC line and its isogenic control, as well as a CSB-deficient cell line based on a healthy hiPSC line (IMR90-4) background, thereby excluding genetic-background-related effects. Neurally induced and differentiated brain sphere cultures were characterized via RNA sequencing, western blot (WB), immunocytochemistry (ICC) and multielectrode arrays (MEAs). CSB deficiency leads to altered gene expression of markers for autophagy, focal adhesion and neural network formation. Cell migration was significantly reduced and electrical activity significantly increased in the disease cell lines. These data hint at the cellular pathologies possibly underlying CSB. By induction of autophagy, the migration phenotype could be partially rescued, suggesting a crucial role of disturbed autophagy in the defective neural migration of the disease lines. Altered autophagy may also lead to inefficient mitophagy. Accordingly, the disease cell lines were shown to have a lower mitochondrial base activity and a higher susceptibility to mitochondrial stress induced by rotenone. Since mitochondria play an important role in neurotransmitter cycling, we suggest that defective mitochondria may lead to the altered electrical activity in the disease cell lines. Failure to clear the defective mitochondria by mitophagy, and thus missing initiation cues for new mitochondrial production, could potentiate this problem. With our data, we aim at establishing a disease adverse outcome pathway (AOP), thereby adding to the in-depth understanding of this multi-faceted disorder and subsequently contributing to alternative drug development.

Keywords: autophagy, disease modeling, in vitro, pluripotent stem cells

Procedia PDF Downloads 110
5261 An Empirical Evaluation of Performance of Machine Learning Techniques on Imbalanced Software Quality Data

Authors: Ruchika Malhotra, Megha Khanna

Abstract:

The development of change prediction models can help software practitioners plan testing and inspection resources in the early phases of software development. However, a major challenge faced during the training process of any classification model is the imbalanced nature of software quality data. A dataset with very few instances of the minority outcome categories leads to an inefficient learning process, and a classification model developed from imbalanced data generally does not predict these minority categories correctly. Thus, for a given dataset, a minority of classes may be change-prone whereas a majority of classes may be non-change-prone. This study explores various alternatives for adeptly handling imbalanced software quality data using different sampling methods and effective MetaCost learners. The study also analyzes and justifies the use of different performance metrics when dealing with imbalanced data. In order to empirically validate the different alternatives, the study uses change data from three application packages of an open-source Android dataset and evaluates the performance of six different machine learning techniques. The results of the study indicate extensive improvement in the performance of the classification models when using a resampling method and robust performance measures.
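
A sketch of one resampling alternative from this design space: SMOTE oversampling of the minority (change-prone) class before training, evaluated with robust measures rather than plain accuracy. The features, class ratio, and choice of random forest are stand-ins; the study's exact samplers, learners, and Android data are not reproduced here.

```python
# Sketch: SMOTE resampling for an imbalanced change-proneness dataset,
# scored with AUC and balanced accuracy. Data and learner are stand-ins.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, balanced_accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                        # OO-metric stand-ins
y = (X[:, 0] + rng.normal(scale=2.0, size=1000) > 3.7).astype(int)  # ~5% minority

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # balance classes

clf = RandomForestClassifier(random_state=0).fit(X_res, y_res)
proba = clf.predict_proba(X_te)[:, 1]
print("AUC:", round(roc_auc_score(y_te, proba), 3),
      "balanced acc:", round(balanced_accuracy_score(y_te, proba > 0.5), 3))
```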

Keywords: change proneness, empirical validation, imbalanced learning, machine learning techniques, object-oriented metrics

Procedia PDF Downloads 406
5260 Performance Evaluation of Clustered Routing Protocols for Heterogeneous Wireless Sensor Networks

Authors: Awatef Chniguir, Tarek Farah, Zouhair Ben Jemaa, Safya Belguith

Abstract:

Optimal routing allows minimizing energy consumption in wireless sensor networks (WSNs). Clustering has proven its effectiveness in organizing WSNs by reducing channel contention and packet collision and enhancing network throughput under heavy load. Nowadays, with the emergence of the Internet of Things, heterogeneity is therefore essential. The Stable Election Protocol (SEP), which increased the network stability period and lifetime, was the first clustering protocol for heterogeneous WSNs. SEP and its descendants, namely Threshold Sensitive SEP (TSEP), Enhanced TSEP (ETSSEP) and Current Energy Allotted TSEP (CEATSEP), were studied. These algorithms' performance was evaluated based on different metrics, especially first node death (FND), to compare their stability. Simulations were conducted with the MATLAB tool considering two scenarios: the first varies the fraction of advanced nodes while fixing the total number of nodes; the second varies the total number of nodes while keeping the number of advanced nodes constant. CEATSEP outperforms its antecedents by increasing stability while, at the same time, keeping a low throughput, and it also operates very well in a large-scale network. Consequently, CEATSEP offers a useful lifespan and energy efficiency compared to the other routing protocols for heterogeneous WSNs.
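
For readers unfamiliar with the SEP family, the following sketch shows the standard SEP cluster-head election: advanced nodes (a fraction m with alpha times extra energy) get a proportionally higher election probability, plugged into the usual rotating threshold T(n). The simulations above were done in MATLAB; this Python version and its parameter values are illustrative only.

```python
# Sketch of SEP-style cluster-head election thresholds in a heterogeneous WSN.
# Formulas follow the standard SEP scheme; parameter values are illustrative.
import random

P_OPT, M, ALPHA = 0.1, 0.2, 1.0                  # base CH prob., advanced fraction, energy factor
p_nrm = P_OPT / (1 + ALPHA * M)                  # normal-node election probability
p_adv = P_OPT * (1 + ALPHA) / (1 + ALPHA * M)    # advanced-node election probability

def threshold(p, r):
    """Rotating threshold T(n) for round r (nodes not elected in the epoch)."""
    return p / (1 - p * (r % int(round(1 / p))))

for r in range(3):
    for kind, p in [("normal", p_nrm), ("advanced", p_adv)]:
        elected = random.random() < threshold(p, r)
        print(f"round {r}: {kind:8s} T(n)={threshold(p, r):.3f} elected={elected}")
```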

Keywords: clustering, heterogeneous, stability, scalability, IoT, WSN

Procedia PDF Downloads 116
5259 Free Radical Dosimetry for Ultrasound in Terephthalic Acid Solutions Containing Gold Nanoparticles

Authors: Ahmad Shanei, Mohammad Mahdi Shanei

Abstract:

When a liquid is irradiated with high-intensity (> 1 W) and low-frequency (≤ 1 MHz) ultrasound, acoustic cavitation occurs. Acoustic cavitation generates free radicals from the breakdown of water and other molecules. The existence of particles in a liquid provides nucleation sites for cavitation bubbles and lowers the ultrasonic intensity threshold needed for cavitation onset. This study was designed to measure hydroxyl radicals in terephthalic acid solutions containing 30 nm gold nanoparticles in the near field of a 1 MHz sonotherapy probe. The effect of the ultrasound irradiation parameters, including the mode of sonication and the ultrasound intensity, on hydroxyl radical production was investigated by the spectrofluorometric method. The recorded fluorescence signal in the terephthalic acid solution containing gold nanoparticles was higher than in the terephthalic acid solution without gold nanoparticles. The results also showed that any increase in the intensity of sonication was associated with an increase in fluorescence intensity. Acoustic cavitation in the presence of gold nanoparticles has been introduced as a way of improving therapeutic effects on tumors. Terephthalic acid dosimetry is also suitable for detecting and quantifying free hydroxyl radicals, as a criterion of cavitation production, over a range of conditions in medical ultrasound fields.

Keywords: acoustic cavitation, gold nanoparticle, chemical dosimetry, terephthalic acid

Procedia PDF Downloads 461
5258 An Enhanced Hybrid Backoff Technique for Minimizing the Occurrence of Collision in Mobile Ad Hoc Networks

Authors: N. Sabiyath Fatima, R. K. Shanmugasundaram

Abstract:

In Mobile Ad hoc Networks (MANETs), every node acts as both transmitter and receiver. The existing backoff models do not accurately forecast the performance of the wireless network and experience elevated packet collisions. Every time a collision happens, the station's contention window (CW) is doubled until it reaches the maximum value. The main objective of this paper is to diminish collisions by means of a Contention Window Multiplicative Increase Decrease Backoff (CWMIDB) scheme. The intention of raising the CW is to reduce the collision probability by spreading the traffic over a larger time span. Within wireless ad hoc networks, the CWMIDB algorithm dynamically controls the contention window of the nodes experiencing collisions. During packet communication, the backoff counter is uniformly selected from the range [0, CW-1]. Here, CW denotes the contention window, and its value depends on the number of unsuccessful transmissions that the packet has experienced. On the initial transmission attempt, CW is set to the minimum value (CWmin); if the transmission attempt fails, the value is doubled, and it is set back to the minimum on a successful transmission. CWMIDB is simulated in the NS2 environment, and its performance is compared with the Binary Exponential Backoff algorithm. The simulation results show an improvement in transmission probability compared to that of the existing backoff algorithm.
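
A toy sketch of the contention-window policy family described above: multiplicative increase on collision, multiplicative decrease on success, with a uniformly drawn backoff counter in [0, CW-1]. The increase/decrease factors and window bounds are assumptions for illustration; the paper's NS2 implementation and exact CWMIDB rules are not reproduced here.

```python
# Toy multiplicative increase/decrease contention-window backoff (MIDB idea).
# Factors and bounds are assumed, not the paper's tuned values.
import random

CW_MIN, CW_MAX = 32, 1024
INC, DEC = 2.0, 0.5              # multiplicative increase / decrease factors

def next_backoff(cw):
    return random.randint(0, cw - 1)   # uniform draw from [0, CW-1]

cw = CW_MIN
for outcome in ["fail", "fail", "success", "fail", "success"]:
    slot = next_backoff(cw)
    if outcome == "fail":              # collision: widen the window
        cw = min(int(cw * INC), CW_MAX)
    else:                              # success: shrink toward CW_MIN
        cw = max(int(cw * DEC), CW_MIN)
    print(outcome, "waited", slot, "slots -> CW =", cw)
```

Note the contrast with plain binary exponential backoff, which only ever doubles on failure and snaps straight back to CW_MIN on success.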

Keywords: backoff, contention window, CWMIDB, MANET

Procedia PDF Downloads 261
5257 Improving the Management Systems of the Ownership Risks in Conditions of Transformation of the Russian Economy

Authors: Mikhail V. Khachaturyan

Abstract:

The article analyzes the problems of improving ownership risk management systems in the conditions of the transformation of the Russian economy. Among the main sources of threats that business owners should highlight are the inefficient implementation of business models and inefficient interaction with hired managers. In this context, it is particularly important to analyze the relationship between business models and ownership risks. The analysis of this problem appears to be relevant for a number of reasons: firstly, the increased risk appetite of the owner directly affects the business model and the composition of his holdings; secondly, owners with significant stakes in the company are factors in the formation of particular types of ownership risks, and these relations have a significant influence on a firm's competitiveness and ultimately determine its survival; and thirdly, an inefficient ownership risk management system is one of the main causes of mass bankruptcies, which significantly affects the stable operation of the economy as a whole. The separation of the processes of possession, disposal and use in modern organizations is the cause not only of problems in the interaction between the owner and managers in managing the organization as a whole, but also of asymmetric information about the kinds and forms of the main risks. Managers tend to avoid risky projects and inhibit the diversification of the organization's assets, while owners may insist on the development of such projects, with the aim not only of creating new value for themselves and consumers but also of increasing the value of the company as a result of increasing capital. When ownership and management are separated, the evaluation of projects by their risk-yield ratio requires preserving the influence of the owner on the process of developing and making management decisions. It is obvious that without a clearly structured system for the owner's participation in managing the risks of their business, further development is hopeless. In modern conditions of forming a risk management system, owners are compelled to compromise between the desire to increase the organization's ability to produce new value, and consequently increase its worth through the implementation of risky projects, and the need to tolerate the cost of the lost opportunities of risk diversification. Improving the effectiveness of ownership risk management may also help activate creditors' claims against inefficient owners, which will ultimately contribute to efficient models of ownership control that exclude variants of insolvency. It is obvious that in modern conditions, the success of an ownership risk management and audit model is largely determined by the ability and willingness of the owner to find a compromise between the potential opportunities for expanding the firm's ability to create new value through risk and maintaining the current level of new value creation at an acceptable level of risk through the use of diversification models.

Keywords: improving, ownership risks, problem, Russia

Procedia PDF Downloads 335