Search results for: Maximum Likelihood Approach
5527 Customers 50+ Behavior in the Financial Market in the Czech Republic
Authors: K. Matušínská, H. Starzyczná, M. Stoklasa
Abstract:
The paper deals with the behaviour of the 50+ segment in the financial market in the Czech Republic. This segment can be regarded as a strong market force and a crucial business potential for financial institutions. The main objective of this paper is to analyse the behaviour of customers aged 50-60 years in the financial market in the Czech Republic and to propose a suitable marketing approach to satisfy their demands in the areas of product, price, distribution and marketing communication policy. The paper is based on data from one part of a primary marketing research study. It outlines the basic problem areas, defines financial services marketing, and states the primary research problem, hypotheses and research methodology. Finally, a suitable marketing approach for the selected sub-segment aged 50-60 years is proposed according to the research findings.
Keywords: Population aging in the Czech Republic, Segment 50-60 years, Financial services marketing, Marketing research, Marketing approach.
5526 Simplified Stress Gradient Method for Stress-Intensity Factor Determination
Authors: Jeries J. Abou-Hanna
Abstract:
Several techniques exist for determining stress-intensity factors in linear elastic fracture mechanics analysis. These techniques are based on analytical, numerical, and empirical approaches that have been well documented in the literature and in engineering handbooks. However, not all techniques share the same merit. In addition to yielding overly conservative results, numerical methods that require extensive computational effort, or that require copious user parameters, hinder practicing engineers from efficiently evaluating stress-intensity factors. This paper investigates the prospects of reducing the complexity and the number of required variables in determining stress-intensity factors through the use of the stress gradient and a weighting function. The heart of this work resides in the understanding that fracture emanating from stress concentration locations cannot be explained by a single maximum stress value, but requires consideration of a critical volume in which the crack exists. To assess the effectiveness of this technique, the study investigated components of different notch geometry and varying levels of stress gradient. Two forms of weighting function were employed to determine stress-intensity factors, and results were compared to exact analytical methods. The results indicated that the “exponential” weighting function was superior to the “absolute” weighting function. An error band of ±10% was met for cases ranging from the steep stress gradient of a sharp v-notch to the less severe stress transitions of a large circular notch. The incorporation of the proposed method has been shown to be a worthwhile consideration.
Keywords: Fracture mechanics, finite element method, stress intensity factor, stress gradient.
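A rough illustration of the weighting idea (not the paper's exact formulation): the sketch below weights an assumed decaying notch stress field with an "absolute" (linear) and an "exponential" weighting function over a critical distance, then converts the effective stress into a stress-intensity estimate. The stress field, dimensions, and weighting forms are all placeholder assumptions.

```python
import numpy as np

def effective_stress(sigma, w):
    # weighted-average stress over the critical distance (uniform grid)
    return (sigma * w).sum() / w.sum()

sigma_0, decay, d, a = 300.0, 0.5e-3, 0.2e-3, 0.01  # MPa, m, m, m (assumed)
x = np.linspace(0.0, d, 400)
sigma = sigma_0 * np.exp(-x / decay)      # assumed notch stress gradient

w_abs = 1.0 - x / d                       # "absolute" (linear) weighting
w_exp = np.exp(-x / d)                    # "exponential" weighting

for name, w in (("absolute", w_abs), ("exponential", w_exp)):
    K = effective_stress(sigma, w) * np.sqrt(np.pi * a)  # MPa*sqrt(m)
    print(f"{name:12s} weighting: K ~ {K:6.1f} MPa*m^0.5")
```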
5525 A Neural Computing-Based Approach for the Early Detection of Hepatocellular Carcinoma
Authors: Marina Gorunescu, Florin Gorunescu, Kenneth Revett
Abstract:
Hepatocellular carcinoma (HCC), also called hepatoma, most commonly appears in patients with chronic viral hepatitis. In patients with a higher suspicion of HCC, such as a small or subtle rise in serum enzyme levels, the best method of diagnosis involves a CT scan of the abdomen, but only at high cost. The aim of this study was to increase the physician's ability to detect HCC early, using a probabilistic neural network-based approach, in order to save time and hospital resources.
Keywords: Early HCC diagnosis, probabilistic neural network.
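A minimal probabilistic neural network (Parzen-window) classifier in the spirit of the approach described; the three-feature vectors below are invented stand-ins for clinical measurements, not the authors' dataset.

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    preds = []
    for x in X_test:
        scores = {}
        for c in np.unique(y_train):
            d2 = ((X_train[y_train == c] - x) ** 2).sum(axis=1)
            # mean Gaussian kernel response = class-conditional density estimate
            scores[c] = np.exp(-d2 / (2.0 * sigma ** 2)).mean()
        preds.append(max(scores, key=scores.get))
    return np.array(preds)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (40, 3)),   # class 0: "no HCC" (simulated)
               rng.normal(2.0, 1.0, (40, 3))])  # class 1: "HCC" (simulated)
y = np.array([0] * 40 + [1] * 40)
print(pnn_predict(X, y, rng.normal(2.0, 1.0, (5, 3))))  # mostly class 1
```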
5524 Wasting Human and Computer Resources
Authors: Mária Csernoch, Piroska Biró
Abstract:
The legends about “user-friendly” and “easy-to-use” birotical tools (computer-related office tools) have been spreading and misleading end-users. This attitude has led to an extremely high number of incorrect documents, causing serious financial losses in the creation, modification, and retrieval processes. Our research proved that there are at least two sources of this underachievement: (1) the lack of a definition of correctly edited, formatted documents, so that end-users do not know whether their methods and results are correct and are not even aware of their own ignorance; (2) the end-users’ problem-solving methods. We have found that in non-traditional programming environments end-users apply, almost exclusively, surface-approach metacognitive methods to carry out their computer-related activities, which have proved less effective than deep-approach methods. Based on these findings, we have developed deep-approach methods that are based on and adapted from traditional programming languages. In this study, we focus on the most popular type of birotical documents, text-based documents. We provide a definition of correctly edited text and, based on this definition, adapt the debugging method known from programming. According to the method, before real text editing takes place, a thorough debugging of already existing texts and a categorization of errors are carried out. In this way, in advance of real text editing, users learn the requirements of text-based documents and of correctly formatted text. The method has proved much more effective than the previously applied surface-approach methods. Its advantages are that real text handling requires far fewer human and computer resources than clicking aimlessly in the GUI (Graphical User Interface), and that data retrieval is much more effective than from error-prone documents.
Keywords: Deep approach metacognitive methods, error-prone birotical documents, financial losses, human and computer resources.
5523 Using Visual Technologies to Promote Excellence in Computer Science Education
Authors: Carol B. Collins, M. H. N. Tabrizi
Abstract:
The purposes of this paper are to (1) promote excellence in computer science by suggesting a cohesive, innovative approach to filling well-documented deficiencies in current computer science education, (2) justify, using the authors' and others' anecdotal evidence from both the classroom and the real world, why this approach holds great potential to successfully eliminate the deficiencies, and (3) invite other professionals to join the authors in proof-of-concept research. The authors' experiences, though anecdotal, strongly suggest that a new approach involving visual modeling technologies should allow computer science programs to retain a greater percentage of prospective and declared majors, as students become more engaged learners, more successful problem-solvers, and better-prepared programmers. In addition, the graduates of such programs will make greater contributions to the profession as skilled problem-solvers. Instead of wearily re-memorizing code as they move to the next course, students will have the problem-solving skills to think and work in more sophisticated and creative ways.
Keywords: Algorithms, CASE, UML, Problem-solving.
5522 The Effects of Negative Electronic Word-of-Mouth and Webcare on Thai Online Consumer Behavior
Authors: Pongsatorn Tantrabundit, Lersak Phothong, Ong-art Chanprasitchai
Abstract:
The emergence of the Internet has extended traditional Word-of-Mouth (WOM) into a new form called Electronic Word-of-Mouth (eWOM). Unlike traditional WOM, eWOM can present information in various ways by combining different components, each of which has a different effect on online consumer behavior. This research investigates the effects of webcare (responding messages) from product/service providers on negative eWOM for two types of products (search and experience). The proposed conceptual model was developed by combining the stages of the consumer decision-making process, the theory of reasoned action (TRA), the theory of planned behavior (TPB), the technology acceptance model (TAM), information integration theory, and the elaboration likelihood model. The analysis techniques used in this study included multivariate analysis of variance (MANOVA) and multiple regression analysis. The results suggest that webcare slightly increases Thai online consumers' perceptions of eWOM trustworthiness, information diagnosticity, and quality. For negative eWOM, we also found that perceived eWOM trustworthiness, diagnosticity, and quality have a positive relationship with eWOM influence, whereas perceived valence has a negative relationship with eWOM influence among Thai online consumers.
5521 Adaptive Fourier Decomposition Based Signal Instantaneous Frequency Computation Approach
Authors: Liming Zhang
Abstract:
There have been different approaches to computing the analytic instantaneous frequency, with a variety of background reasoning, practical applicability, and restrictions. This paper presents an instantaneous frequency computation approach based on adaptive Fourier decomposition and α-counting. Adaptive Fourier decomposition is a recently proposed signal decomposition approach; the instantaneous frequency can be computed through the so-called mono-components it produces. Due to its fast energy convergence, the highest frequency of the signal, which in most situations represents noise, is discarded by the adaptive Fourier decomposition. A new instantaneous frequency definition for a large class of so-called simple waves is also proposed in this paper. Simple waves cover a wide range of signals for which the concept of instantaneous frequency has a clear physical sense. The α-counting instantaneous frequency can be used to compute the highest frequency of a signal. Combining the two approaches, one can obtain the instantaneous frequencies of the whole signal. An experiment demonstrates the computation procedure with promising results.
Keywords: Adaptive Fourier decomposition, Fourier series, signal processing, instantaneous frequency.
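Adaptive Fourier decomposition itself is beyond a short sketch; as a point of reference, the snippet below computes the classical analytic-signal instantaneous frequency (the phase derivative via the Hilbert transform), which is physically meaningful exactly for mono-component signals of the kind AFD extracts.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * (50 * t + 20 * t ** 2))  # chirp: IF = 50 + 40 t Hz

z = hilbert(x)                                  # analytic signal
phase = np.unwrap(np.angle(z))
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # Hz

print(inst_freq[100], inst_freq[800])           # ~54 Hz and ~82 Hz
```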
5520 High Level Synthesis of Digital Filters Based On Sub-Token Forwarding
Authors: Iyad F. Jafar, Sandra J. Alrawashdeh, Ban K. Alhamayel
Abstract:
High level synthesis (HLS) is a process which generates a register-transfer level design for a digital system from its behavioral description. There are many HLS algorithms and commercial tools. However, most of these algorithms consider a behavioral description of the system when a single token is presented to it. This approach does not exploit extra hardware efficiently, especially in the design of digital filters, where common operations may exist between successive tokens. In this paper, we modify the behavioral description to process multiple tokens in parallel. Unlike full parallel processing, however, this approach does not require full hardware replication; it exploits the presence of common operations between successive tokens. The performance of the proposed approach is better than sequential processing and approaches that of full parallel processing as the hardware resources are increased.
Keywords: Digital filters, high level synthesis, sub-token forwarding.
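A software analogy of the idea (not the paper's register-transfer design): when successive tokens share common operations, partial results can be forwarded rather than recomputed. Here an N-tap moving-average filter forwards its running sum from one output token to the next.

```python
def moving_average(samples, n):
    out, running = [], sum(samples[:n])
    out.append(running / n)
    for i in range(n, len(samples)):
        running += samples[i] - samples[i - n]  # forward the shared partial sum
        out.append(running / n)
    return out

print(moving_average([1, 2, 3, 4, 5, 6], 3))    # [2.0, 3.0, 4.0, 5.0]
```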
5519 Image Segmentation Using Suprathreshold Stochastic Resonance
Authors: Rajib Kumar Jha, P. K. Biswas, B. N. Chatterji
Abstract:
In this paper a new concept, the partial complement of a graph G, is introduced, and using it a new graph parameter, called the completion number of a graph G and denoted by c(G), is defined. Some basic properties of the completion number are studied and upper bounds for the completion number of several classes of graphs are obtained; the paper also includes a characterization.
Keywords: Completion Number, Maximum Independent subset, Partial complements, Partial self complementary.
5518 An Application of Extreme Value Theory as a Risk Measurement Approach in Frontier Markets
Authors: Dany Ng Cheong Vee, Preethee Nunkoo Gonpot, Noor-Ul-Hacq Sookia
Abstract:
In this paper, we consider the application of Extreme Value Theory (EVT) as a risk measurement tool. The Value at Risk (VaR) for a set of indices from six stock exchanges of frontier markets is calculated using the Peaks over Threshold method, and the performance of the model is evaluated index-wise using coverage tests and loss functions. Our results show that “fat-tailedness” of the data alone is not enough to justify the use of EVT as a VaR approach; the structure of the returns dynamics is also a determining factor. The approach works well in markets which have experienced extremes in the past, making the model capable of coping with extremes to come (Colombo, Tunisia and Zagreb stock exchanges). On the other hand, we find that indices with lower past than present volatility fail to adequately deal with future extremes (Mauritius and Kazakhstan). We also conclude that using EVT alone produces rather static VaR figures that do not reflect the actual dynamics of the data.
Keywords: Extreme Value Theory, Financial Crisis 2008, Frontier Markets, Value at Risk.
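A minimal sketch of the peaks-over-threshold VaR computation, with simulated fat-tailed losses standing in for the index returns and the paper's backtesting (coverage tests, loss functions) omitted:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
losses = rng.standard_t(df=4, size=2500) * 0.01     # simulated daily losses

u = np.quantile(losses, 0.95)                       # threshold
excesses = losses[losses > u] - u
xi, _, beta = genpareto.fit(excesses, floc=0.0)     # GPD fit to the excesses

# POT estimator: VaR_q = u + beta/xi * [((n/N_u) * (1-q))^(-xi) - 1]
n, n_u, q = len(losses), len(excesses), 0.99
var_99 = u + beta / xi * (((n / n_u) * (1 - q)) ** (-xi) - 1)
print(f"99% one-day VaR ~ {var_99:.4f}")
```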
5517 Estimation of Hysteretic Damping in Steel Dual Systems with Buckling Restrained Brace and Moment Resisting Frame
Authors: Seyed Saeid Tabaee, Omid Bahar
Abstract:
Nowadays, energy dissipation devices are commonly used in structures. A high rate of energy absorption during earthquakes is the benefit of using such devices, resulting in reduced damage to structural elements, specifically columns. The hysteretic damping capacity of energy dissipation devices is the key point, but it may complicate the analysis and design process. This effect is generally represented by the Equivalent Viscous Damping (EVD), which may be obtained from the expected hysteretic behavior at the design or maximum considered displacement of a structure. In this paper, the hysteretic damping coefficient of a steel Moment Resisting Frame (MRF) whose performance is enhanced by a Buckling Restrained Brace (BRB) system is evaluated. Foreknowledge of the damping fraction between the BRB and the MRF is essential for seismic design procedures such as the Direct Displacement-Based Design (DDBD) method. This paper presents an approach to calculating the damping fraction for such systems by carrying out nonlinear time history analysis (NTHA) under harmonic loading tuned to the natural frequency of the system. Two MRF structures, one equipped with a BRB and the other without, are studied simultaneously. Extensive analysis shows that the damping fraction of each system may be calculated from its share of the story shear. In this way, the contribution of each BRB in the floors, and its general contribution to the structural performance, may be clearly recognized in advance.
Keywords: Buckling restrained brace, direct displacement-based design, dual systems, hysteretic damping, moment resisting frames.
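The standard equivalent-viscous-damping estimate underlying such studies is xi_eq = E_D / (4*pi*E_S), with E_D the energy dissipated per hysteresis cycle (the loop area) and E_S the strain energy at peak response. A sketch on a synthetic loop, not the BRB+MRF model of the paper:

```python
import numpy as np
from scipy.integrate import trapezoid

theta = np.linspace(0.0, 2.0 * np.pi, 2001)
d = 0.05 * np.sin(theta)                  # displacement cycle (m), assumed
f = 800.0 * d + 16.0 * np.cos(theta)      # restoring + dissipative force (kN)

E_D = trapezoid(f, d)                     # hysteresis loop area per cycle
E_S = 0.5 * 800.0 * 0.05 ** 2             # strain energy at peak displacement
xi_eq = E_D / (4.0 * np.pi * E_S)
print(f"equivalent viscous damping ratio ~ {xi_eq:.3f}")   # ~0.20
```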
5516 Bottom Up Text Mining through Hierarchical Document Representation
Authors: Y. Djouadi, F. Souam
Abstract:
Most existing text mining approaches are designed with the transaction database model in mind. The mined dataset is thus structured using just one concept, the “transaction”, while the whole dataset is modeled as an abstract “set”. In such cases, the structure of the whole dataset and the relationships among the transactions themselves are not modeled and, consequently, not considered in the mining process. We believe that taking the structural properties of hierarchically structured information (e.g. textual documents) into account in the mining process can lead to better results. For this purpose, a hierarchical association rule mining approach for textual documents is proposed in this paper, and the classical set-oriented mining approach is reconsidered in favor of a Directed Acyclic Graph (DAG) oriented approach. Natural language processing techniques are used to obtain the DAG structure. Based on this graph model, a hierarchical bottom-up algorithm is proposed, whose main idea is that each node is mined together with its parent node.
Keywords: Graph-based association rules mining, hierarchical document structure, text mining.
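A hedged sketch of the "each node is mined with its parent" idea on a toy tree with a term set per node; the structure and scoring below are illustrative assumptions, not the paper's algorithm.

```python
parent = {"sec1": "doc", "sec2": "doc", "par1": "sec1", "par2": "sec1"}
terms = {
    "doc":  {"mining", "text"},
    "sec1": {"mining", "rules"},
    "sec2": {"text", "graph"},
    "par1": {"rules", "support"},
    "par2": {"rules", "graph"},
}

rules = {}
for node, par in parent.items():          # bottom-up: each node with its parent
    for t_child in terms[node]:
        for t_parent in terms[par] - {t_child}:
            key = (t_child, t_parent)     # candidate rule child_term -> parent_term
            rules[key] = rules.get(key, 0) + 1

for (a, b), support in sorted(rules.items(), key=lambda kv: -kv[1]):
    print(f"{a} -> {b}: support {support}")
```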
5515 Secret Communications Using Synchronized Sixth-Order Chua's Circuits
Authors: R. M. López-Gutiérrez, E. Rodríguez-Orozco, C. Cruz-Hernández, E. Inzunza-González, C. Posadas-Castillo, E. E. García-Guerrero, L. Cardoza-Avendaño
Abstract:
In this paper, we use the Generalized Hamiltonian systems approach to synchronize a modified sixth-order Chua's circuit, which generates hyperchaotic dynamics. Synchronization is obtained between the master and slave dynamics, with the slave given by an observer. We apply this approach to transmit private information (analog and binary), while the encoding remains potentially secure.
Keywords: Hyperchaos synchronization, sixth-order Chua's circuit, observers, simulation, secure communication.
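A minimal sketch of master-slave synchronization for the classical third-order Chua circuit with state coupling on x; the paper's sixth-order circuit and Hamiltonian observer design are more involved, and the coupling gain k is an assumption.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, m0, m1, k = 9.0, 14.286, -8.0 / 7.0, -5.0 / 7.0, 10.0

def h(x):  # Chua-diode piecewise-linear nonlinearity
    return m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))

def pair(t, s):
    xm, ym, zm, xs, ys, zs = s
    master = [alpha * (ym - xm - h(xm)), xm - ym + zm, -beta * ym]
    # slave: same dynamics plus a correction driven by the master's x state
    slave = [alpha * (ys - xs - h(xs)) + k * (xm - xs), xs - ys + zs, -beta * ys]
    return master + slave

sol = solve_ivp(pair, (0, 100), [0.7, 0, 0, -0.5, 0.2, 0.1], max_step=0.01)
err = np.abs(sol.y[0] - sol.y[3])
print(f"x-error: start {err[0]:.2f}, end {err[-1]:.2e}")  # decays toward zero
```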
5514 Using Textual Pre-Processing and Text Mining to Create Semantic Links
Authors: Ricardo Avila, Gabriel Lopes, Vania Vidal, Jose Macedo
Abstract:
This article offers an approach to the automatic discovery of semantic concepts and links in the domain of Oil Exploration and Production (E&P). Machine learning methods combined with textual pre-processing techniques were used to detect local patterns in texts and thus generate new concepts and new semantic links. Even using the more specific vocabularies of the oil domain, our approach has achieved satisfactory results, suggesting that the proposal can be applied to other domains and languages, requiring only minor adjustments.
Keywords: Semantic links, data mining, linked data, SKOS.
5513 A Comparative Study of Indoor Radon Concentrations between Dwellings and Workplaces in the Ko Samui District, Surat Thani Province, Southern Thailand
Authors: Kanokkan Titipornpun, Tripob Bhongsuwan, Jan Gimsa
Abstract:
The Ko Samui district of Surat Thani province is located in an area with high amounts of equivalent uranium in the ground surface, which is the source of radon. Our research in the Ko Samui district aimed at comparing indoor radon concentrations between dwellings and workplaces. Measurements of indoor radon concentrations were carried out in 46 dwellings and 127 workplaces, using CR-39 alpha-track detectors in closed cups. A total of 173 detectors were distributed in 7 sub-districts. The detectors were placed in the bedrooms of dwellings and the workrooms of workplaces, and exposed to airborne radon for 90 days. After exposure, the alpha tracks were made visible by chemical etching and counted manually under an optical microscope; the track densities were assumed to be correlated with the radon concentration levels. We found that the radon concentrations could be well described by a log-normal distribution. Most concentrations (37%) fell between 16 and 30 Bq.m-3. The radon concentrations varied from a minimum of 11 Bq.m-3 to a maximum of 305 Bq.m-3, the minimum being found in a workplace and the maximum in a dwelling. Only for four samples (3%) were the indoor radon concentrations higher than the reference level recommended by the WHO (100 Bq.m-3). The overall geometric mean in the surveyed area was 32.6±1.65 Bq.m-3, lower than the worldwide average (39 Bq.m-3). A statistical comparison showed that the geometric mean indoor radon concentration in dwellings (46.0±1.55 Bq.m-3) was significantly higher than in workplaces (28.8±1.58 Bq.m-3) at the 0.05 level. Moreover, our study found that the majority of the bedrooms in dwellings had a closed atmosphere, resulting in poorer ventilation than in most of the workplaces, which had air flow through open doors and windows in the daytime. We consider this the main reason for the higher geometric mean indoor radon concentration in dwellings compared to workplaces.
Keywords: CR-39 detector, indoor radon, radon in dwelling, radon in workplace.
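A sketch of the statistical comparison: with log-normal concentrations, geometric means are compared via a t-test on the log-transformed data. The samples below are simulated from the reported geometric means and geometric standard deviations, not the survey data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
dwellings = rng.lognormal(np.log(46.0), np.log(1.55), size=46)
workplaces = rng.lognormal(np.log(28.8), np.log(1.58), size=127)

gm_d = np.exp(np.log(dwellings).mean())     # geometric mean, dwellings
gm_w = np.exp(np.log(workplaces).mean())    # geometric mean, workplaces
t, p = stats.ttest_ind(np.log(dwellings), np.log(workplaces), equal_var=False)
print(f"GM dwellings {gm_d:.1f}, GM workplaces {gm_w:.1f}, p = {p:.4f}")
```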
5512 Two Approaches to Code Mobility in an Agent-based E-commerce System
Authors: Costin Badica, Maria Ganzha, Marcin Paprzycki
Abstract:
Recently, a model multi-agent e-commerce system based on mobile buyer agents and the transfer of strategy modules was proposed. In this paper a different approach to code mobility is introduced, in which agent mobility is replaced by local agent creation supplemented by code mobility similar to that of the original proposal. UML diagrams of the agents involved in the new approach to mobility, together with the augmented system activity diagram, are presented and discussed.
Keywords: Agent system, agent mobility, code mobility, e-commerce, UML formalization.
5511 A Parallel Architecture for the Real Time Correction of Stereoscopic Images
Authors: Zohir Irki, Michel Devy
Abstract:
In this paper, we present an architecture for the implementation of a real-time stereoscopic image correction approach. The architecture is parallel and makes use of several memory blocks in which pre-calculated data relating to the cameras used for image acquisition are stored. The use of reduced images proves to be essential in the proposed approach; the suggested architecture must therefore be able to carry out the reduction of the original images in real time.
Keywords: Image reduction, real-time correction, parallel architecture, parallel processing.
5510 Optimization of R507A-R23 Cascade Refrigeration System using Genetic Algorithm
Authors: A. D. Parekh, P. R. Tailor, H. R. Jivanramajiwala
Abstract:
The present work deals with the optimization of a cascade refrigeration system using the eco-friendly refrigerant pair R507A and R23. R507A is an azeotropic mixture composed of the HFC refrigerants R125/R143a (50%/50% by wt.). R23 is a single-component HFC refrigerant used as a replacement for the CFC refrigerant R13 in low temperature applications. These refrigerants have zero ozone depletion potential and are non-flammable. Optimization of the R507A-R23 cascade refrigeration system performance parameters, such as minimum work required, refrigeration effect, coefficient of performance, and exergetic efficiency, was carried out in terms of combinations of eight operating parameters using a Genetic Algorithm tool. The eight operating parameters are: (1) low-side evaporator temperature; (2) high-side condenser temperature; (3) temperature difference in the cascade heat exchanger; (4) low-side condenser temperature; (5) low-side degree of subcooling; (6) high-side degree of subcooling; (7) low-side degree of superheating; (8) high-side degree of superheating. Results show that for minimum work the system should operate at a high temperature in the low-side evaporator, a low temperature in the high-side condenser, a low temperature difference in the cascade condenser, a high temperature in the low-side condenser, and low degrees of subcooling and superheating on both sides. For maximum refrigeration effect the system should operate at high temperatures in the low-side evaporator and high-side condenser, a high temperature difference in the cascade condenser, a low temperature in the low-side condenser, and higher degrees of subcooling on the LT and HT sides. For maximum coefficient of performance and exergetic efficiency, the system should operate at a high temperature in the low-side evaporator, a low temperature in the high-side condenser, a low temperature difference in the cascade condenser, a high temperature in the low-side condenser, and higher degrees of subcooling and superheating on the low side of the system.
Keywords: Cascade refrigeration system, genetic algorithm, R507A, R23.
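A skeleton of the GA search over the eight operating parameters; the COP objective below is a crude placeholder, not the paper's R507A/R23 property model (which would need refrigerant property data, e.g. from a library such as CoolProp).

```python
import numpy as np

BOUNDS = np.array([
    [-80, -60],  # 1 low-side evaporator temperature (deg C)
    [30, 55],    # 2 high-side condenser temperature (deg C)
    [3, 10],     # 3 cascade heat-exchanger temperature difference (K)
    [-40, -20],  # 4 low-side condenser temperature (deg C)
    [0, 10],     # 5 low-side subcooling (K)
    [0, 10],     # 6 high-side subcooling (K)
    [0, 10],     # 7 low-side superheating (K)
    [0, 10],     # 8 high-side superheating (K)
])

def cop(p):  # placeholder: penalize wide temperature lift, reward subcooling
    return (50.0 + p[4] + p[5]) / ((p[1] - p[0]) + p[2])

rng = np.random.default_rng(0)
pop = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], (60, 8))
for _ in range(200):
    fit = np.array([cop(ind) for ind in pop])
    parents = pop[np.argsort(fit)[-30:]]                  # truncation selection
    kids = (parents[rng.integers(0, 30, 60)]
            + parents[rng.integers(0, 30, 60)]) / 2       # arithmetic crossover
    pop = np.clip(kids + rng.normal(0, 0.5, kids.shape),  # Gaussian mutation
                  BOUNDS[:, 0], BOUNDS[:, 1])
print(pop[np.argmax([cop(ind) for ind in pop])].round(1))
```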
5509 Taguchi-Based Six Sigma Approach to Optimize Surface Roughness for Milling Processes
Authors: Sky Chou, Joseph C. Chen
Abstract:
This paper focuses on using Six Sigma methodologies to improve the surface roughness of a part produced on a CNC milling machine. It presents a case study in which the surface roughness of milled aluminum must be improved in order to reduce or eliminate defects and to raise the process capability indices Cp and Cpk of the CNC milling process. The Six Sigma DMAIC (define, measure, analyze, improve, and control) approach was applied to improve the process, reduce defects, and ultimately reduce costs. A Taguchi-based Six Sigma approach was applied to identify the optimized processing parameters that achieve the target surface roughness specified by the customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors identified consist of feed rate, depth of cut, spindle speed, and surface roughness. The noise factor is the difference between the old cutting tool and the new cutting tool. The confirmation run with the optimal parameters confirmed that the new parameter settings are correct and improved the process capability index. This study shows that the Taguchi-based Six Sigma approach can be used efficiently to phase out defects and improve the process capability index of a CNC milling process.
Keywords: CNC machining, Six Sigma, Surface roughness, Taguchi methodology.
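A sketch of the Taguchi analysis for three of the factors: an L9 orthogonal array with three 3-level factors and the smaller-the-better signal-to-noise ratio S/N = -10*log10(mean(y^2)). The Ra values are invented for illustration, and the paper's fourth factor and noise runs are omitted.

```python
import numpy as np

L9 = np.array([  # level indices (0..2) per factor, standard L9 columns 1-3
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])
ra = np.array([2.1, 1.8, 1.6, 1.9, 1.4, 1.7, 1.5, 1.6, 1.2])  # hypothetical Ra

sn = -10.0 * np.log10(ra ** 2)       # smaller-the-better S/N, one run per cell
for col, name in enumerate(["feed rate", "depth of cut", "spindle speed"]):
    means = [sn[L9[:, col] == lvl].mean() for lvl in range(3)]
    best = int(np.argmax(means))     # highest S/N = least roughness
    print(f"{name}: mean S/N per level {np.round(means, 2)}, pick level {best}")
```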
5508 A New Self-Adaptive EP Approach for ANN Weights Training
Authors: Kristina Davoian, Wolfram-M. Lippe
Abstract:
Evolutionary Programming (EP) is a branch of Evolutionary Algorithms (EA) in which mutation is the main reproduction operator. This paper presents a novel EP approach for Artificial Neural Network (ANN) learning. The proposed strategy consists of two components: a self-adaptive component, which contains phenotype information, and a dynamic component, described by the genotype. Self-adaptation is achieved by the addition of a value, called the network weight, which depends on the total number of hidden layers and the average number of neurons in the hidden layers. The dynamic component changes its value depending on the fitness of the chromosome exposed to mutation. Thus, the mutation step size is controlled by two components, encapsulated in the algorithm, which adjust it according to the characteristics of the predefined ANN architecture and the fitness of the particular chromosome. A comparative analysis of the proposed approach and classical EP (Gaussian mutation) showed that using both phenotype and genotype information in the mutation strategy significantly accelerates the evolution process.
Keywords: Artificial Neural Networks (ANN), learning theory, Evolutionary Programming (EP), mutation, self-adaptation.
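A hedged sketch of the two-component mutation described above; the abstract does not give the exact formulas, so the network-weight and fitness-dependent terms below are illustrative assumptions.

```python
import numpy as np

def network_weight(n_hidden_layers, avg_neurons):
    # phenotype component: assumed to shrink with network size
    return 1.0 / np.sqrt(n_hidden_layers * avg_neurons)

def mutate(chromosome, fitness, best_fitness, n_layers=2, avg_neurons=10,
           rng=np.random.default_rng(0)):
    w = network_weight(n_layers, avg_neurons)      # phenotype information
    dynamic = 1.0 + (best_fitness - fitness)       # fitness-dependent component
    return chromosome + rng.normal(0.0, w * dynamic, size=chromosome.shape)

weights = np.zeros(25)                             # flattened ANN weights
print(mutate(weights, fitness=0.6, best_fitness=0.9)[:5].round(3))
```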
5507 A Genetic-Algorithm-Based Approach for Audio Steganography
Authors: Mazdak Zamani, Azizah A. Manaf, Rabiah B. Ahmad, Akram M. Zeki, Shahidan Abdullah
Abstract:
In this paper, we present a novel, principled approach to resolving the remaining problems of the substitution technique of audio steganography. Using the proposed genetic algorithm, message bits are embedded into multiple, vague and higher LSB layers, resulting in increased robustness. Robustness is increased especially against intentional attacks that try to reveal the hidden message, as well as against unintentional attacks such as noise addition.
Keywords: Artificial intelligence, audio steganography, data hiding, genetic algorithm, substitution techniques.
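For reference, a plain LSB-substitution baseline (embed and extract) of the kind the GA-based method improves on; the GA search over samples and LSB layers is omitted.

```python
import numpy as np

def embed(samples, bits):
    out = samples.copy()
    out[: len(bits)] = (out[: len(bits)] & ~1) | np.asarray(bits)  # set LSBs
    return out

def extract(samples, n):
    return samples[:n] & 1

audio = np.array([1000, -2048, 517, 88, -3], dtype=np.int16)  # toy PCM samples
stego = embed(audio, [1, 0, 1, 1, 0])
print(extract(stego, 5))   # -> [1 0 1 1 0]
```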
5506 Genetic Programming Based Data Projections for Classification Tasks
Authors: César Estébanez, Ricardo Aler, José M. Valls
Abstract:
In this paper we present a GP-based method for automatically evolving projections, so that data can be more easily classified in the projected spaces. At the same time, our approach can reduce dimensionality by constructing more relevant attributes. The fitness of each projection measures how easy it is to classify the dataset after applying the projection, and is quickly computed by a simple linear perceptron. We have tested our approach in three domains. The experiments show that it obtains good results compared to other machine learning approaches, while reducing dimensionality in many cases.
Keywords: Classification, genetic programming, projections.
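A sketch of the fitness evaluation: apply a candidate projection, train a simple linear perceptron on the projected data, and use its training accuracy as fitness. The GP loop that evolves the projection is omitted; the product projection below is a hand-picked stand-in for an evolved individual.

```python
import numpy as np

def perceptron_accuracy(X, y, epochs=20):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):                 # yi in {-1, +1}
            if yi * (xi @ w + b) <= 0:
                w, b = w + yi * xi, b + yi       # classic perceptron update
    return (np.sign(X @ w + b) == y).mean()

def fitness(project, X, y):
    return perceptron_accuracy(project(X), y)

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))
y = np.where(X[:, 0] * X[:, 1] > 0, 1, -1)       # not linearly separable
print("raw features :", fitness(lambda X: X, X, y))
print("projected    :", fitness(lambda X: X[:, :1] * X[:, 1:2], X, y))
```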
5505 Evaluation of Model-Based Code Generation for Embedded Systems–Mature Approach for Development in Evolution
Authors: Nikolay P. Brayanov, Anna V. Stoynova
Abstract:
The model-based development approach is gaining support and acceptance. Its higher level of abstraction simplifies system description, allowing domain experts to do their best without specific programming knowledge. The different levels of simulation support rapid prototyping and the verification and validation of the product even before it exists physically. Nowadays the model-based approach is beneficial for modelling complex embedded systems and for generating code for many different hardware platforms. Moreover, it can be applied in safety-relevant industries like automotive, bringing additional automation to the expensive device certification process, especially in software qualification. Some companies report cost savings and quality improvements from using it, while others claim no major changes or even cost increases. This publication examines the level of maturity and autonomy of the model-based approach for code generation. It is based on a real-life automotive seat heater (ASH) module, developed using tools from The MathWorks, Inc. The model, created with Simulink, Stateflow and Matlab, is used for automatic generation of C code with Embedded Coder. To prove the maturity of the process, the Code Generation Advisor is used for automatic configuration, and all additional configuration parameters are set to auto where applicable, leaving the generation process to function autonomously. The publication then compares the quality of the automatically generated embedded code with that of manually developed code. The measurements show that, in general, the generated code is no worse than the manual code. A deeper analysis of the technical parameters enumerates the disadvantages, some of which are identified as topics for future work.
Keywords: Embedded code generation, embedded C code quality, embedded systems, model-based development.
5504 A Socio-Technical Approach to Cyber-Risk Assessment
Authors: Kitty Kioskli, Nineta Polemi
Abstract:
Evaluating the levels of cyber-security risk within an enterprise is essential for protecting its information system, services, and all its digital assets against security incidents (e.g. accidents, malicious acts, massive cyber-attacks). The existing risk assessment methodologies (e.g. EBIOS, OCTAVE, CRAMM, NIST-800) adopt a technical approach, considering only the capability, intention and target of the attacker as attack factors, and paying no attention to the attacker's psychological profile and personality traits. In this paper, a socio-technical approach to cyber risk assessment is proposed, in order to achieve more realistic risk estimates by considering the personality traits of the attackers. In particular, based upon principles from investigative psychology and behavioural science, a multi-dimensional, extended, quantifiable model of an attacker's profile is developed, which becomes an additional factor in the cyber risk level calculation.
Keywords: Attacker, behavioural models, cyber risk assessment, cyber-security, human factors, investigative psychology, ISO27001, ISO27005.
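An illustrative sketch only: a conventional likelihood-times-impact score extended by an attacker-profile multiplier built from normalized personality traits. The trait set and weights are hypothetical, not the paper's model.

```python
TRAIT_WEIGHTS = {"risk_seeking": 0.40, "persistence": 0.35, "impulsivity": 0.25}

def attacker_factor(traits):                 # trait scores in [0, 1]
    score = sum(TRAIT_WEIGHTS[t] * v for t, v in traits.items())
    return 0.5 + score                       # multiplier in [0.5, 1.5]

def cyber_risk(likelihood, impact, traits):  # ISO 27005-style base score
    return likelihood * impact * attacker_factor(traits)

profile = {"risk_seeking": 0.9, "persistence": 0.7, "impulsivity": 0.2}
print(cyber_risk(likelihood=3, impact=4, traits=profile))  # 13.86 vs. 12 base
```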
5503 A Real-Time Bayesian Decision-Support System for Predicting Suspect Vehicle's Intended Target Using a Sparse Camera Network
Authors: Payam Mousavi, Andrew L. Stewart, Huiwen You, Aryeh F. G. Fayerman
Abstract:
We present a decision-support tool to assist an operator in the detection and tracking of a suspect vehicle traveling to an unknown target destination. Multiple data sources, such as traffic cameras, traffic information, weather, etc., are integrated and processed in real-time to infer a suspect's intended destination chosen from a list of pre-determined high-value targets. Previously, we presented our work in the detection and tracking of vehicles using traffic and airborne cameras. Here, we focus on the fusion and processing of that information to predict a suspect's behavior. The network of cameras is represented by a directional graph, where the edges correspond to direct road connections between the nodes and the edge weights are proportional to the average time it takes to travel from one node to another. For our experiments, we construct our graph based on the greater Los Angeles subset of Caltrans' “Performance Measurement System” (PeMS) dataset. We propose a Bayesian approach in which a posterior probability for each target is continuously updated based on detections of the suspect in the live video feeds. Additionally, we introduce the concept of 'soft interventions', inspired by the field of Causal Inference. Soft interventions are herein defined as interventions that do not immediately interfere with the suspect's movements; rather, a soft intervention may induce the suspect to make a new decision, ultimately making their intent more transparent. For example, a soft intervention could be temporarily closing a road a few blocks from the suspect's current location, which may require the suspect to change course. The objective of these interventions is to gain the maximum amount of information about the suspect's intent in the shortest possible time. Our system currently operates in a human-on-the-loop mode where, at each step, a set of recommendations is presented to the operator to aid in decision-making. In principle, the system could operate autonomously, only prompting the operator for critical decisions, allowing it to scale up significantly to larger areas and multiple suspects. Once the intended target is identified with sufficient confidence, the vehicle is reported to the authorities for further action. Other recommendations include a selection of road closures, i.e., soft interventions, or continued monitoring. We evaluate the performance of the proposed system using simulated scenarios in which the suspect, starting at a random location, takes a noisy shortest path to an intended target that is unknown to our system. The decision thresholds are selected to maximize the chances of determining the suspect's intended target in the minimum amount of time and with the smallest number of interventions. We conclude by discussing the limitations of the current approach, which motivate a machine learning approach based on reinforcement learning that would relax some of the current limiting assumptions.
Keywords: Autonomous surveillance, Bayesian reasoning, decision-support, interventions, patterns-of-life, predictive analytics, predictive insights.
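A sketch of the core posterior update: each observed move of the suspect reweights the candidate targets by how much the move shortens the remaining shortest-path distance to each target. The graph is a toy stand-in for the PeMS-based road network, and the exponential likelihood model is an assumption.

```python
import numpy as np
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 2), ("B", "C", 3), ("B", "D", 4),
                           ("C", "T1", 2), ("D", "T2", 1)])
posterior = {"T1": 0.5, "T2": 0.5}           # uniform prior over targets

def update(posterior, prev_node, cur_node, beta=1.0):
    for t in posterior:
        d_prev = nx.shortest_path_length(G, prev_node, t, weight="weight")
        d_cur = nx.shortest_path_length(G, cur_node, t, weight="weight")
        # likelihood grows when the move brings the suspect closer to t
        posterior[t] *= np.exp(-beta * (d_cur - d_prev))
    z = sum(posterior.values())
    return {t: p / z for t, p in posterior.items()}

for prev, cur in [("A", "B"), ("B", "C")]:   # observed track
    posterior = update(posterior, prev, cur)
print(posterior)                             # mass concentrates on T1
```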
5502 A Flipped Classroom Approach for Non-Science Majors
Authors: Nidhi Gadura
Abstract:
To ensure student success in a non-majors biology course, a flipped classroom pedagogical approach was developed and implemented. All students were assigned online lectures to listen to before coming to class. A three-hour lecture was split into one hour of online content, one hour of in-class lecture, and one hour of worksheets completed by students in the classroom. This deviation from a traditional three-hour in-class lecture resulted in increased student interest in science as well as better understanding of difficult scientific concepts. A pre- and post-survey was given to measure interest in the subject, and grades were used to measure success rates. While the overall grade average did not change dramatically, students reported a much better appreciation of biology. Students also overwhelmingly liked the use of in-class worksheets to help them understand the concepts, and appreciated being able to listen to lectures online at their own pace, repeating them if needed. The flipped classroom approach turned out to work very well for our non-science majors, and the author is ready to implement it in other classrooms.
Keywords: Flipped classroom, non-science majors, pedagogy, technological pedagogical model.
5501 Characteristic Function in Estimation of Probability Distribution Moments
Authors: Vladimir S. Timofeev
Abstract:
In this article the problem of estimating distributional moments is considered. A new approach to moment estimation, based on the characteristic function, is proposed. Using a statistical simulation technique, the author shows that the new approach has certain robustness properties. Numerical differentiation is used to calculate the derivatives of the characteristic function. The results obtained confirm that the idea works efficiently and can be recommended for statistical applications.
Keywords: Characteristic function, distributional moments, robustness, outlier, statistical estimation problem, statistical simulation.
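A sketch of the idea: estimate low-order moments from numerical (central-difference) derivatives of the empirical characteristic function phi(t) = mean(exp(i*t*X)) at t = 0, using E[X^k] = phi^(k)(0) / i^k.

```python
import numpy as np

def ecf(t, x):  # empirical characteristic function on a grid of t values
    return np.exp(1j * np.multiply.outer(x, t)).mean(axis=0)

rng = np.random.default_rng(42)
x = rng.normal(loc=2.0, scale=1.5, size=10_000)
h = 1e-3

phi = ecf(np.array([-h, 0.0, h]), x)
m1 = ((phi[2] - phi[0]) / (2 * h) / 1j).real               # E[X]
m2 = ((phi[2] - 2 * phi[1] + phi[0]) / h ** 2 / -1).real   # E[X^2] = -phi''(0)
print(f"mean ~ {m1:.3f}, variance ~ {m2 - m1 ** 2:.3f}")   # ~2.0 and ~2.25
```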
5500 Optimization of Doubly Fed Induction Generator Equivalent Circuit Parameters by Direct Search Method
Authors: Mamidi Ramakrishna Rao
Abstract:
The doubly-fed induction generator (DFIG) is currently the choice for many wind turbines. These generators, when connected to the grid through a converter, are subjected to varied power system conditions such as voltage variation, frequency variation, and short circuit faults. Further, many countries, such as Canada, Germany, the UK, and Scotland, have distinct grid codes for wind turbines, under which, following network faults, wind turbines have to supply a definite reactive current. To satisfy these requirements, including reactive current capability, an optimum electrical design is mandatory for the DFIG. This paper sets out to optimize the equivalent circuit parameters of such an electrical design for satisfactory DFIG performance. The direct search method has been used for the optimization. The variables include the electromagnetic core dimensions (diameters and stack length), slot dimensions, radial air gap between stator and rotor, and winding copper cross-section area. Optimization of a 2 MW DFIG has been executed separately for three objective functions: maximum reactive power capability (Case I), maximum efficiency (Case II), and minimum weight (Case III). In the optimization program, voltage variations (10%), leading and lagging power factors (0.95), and speeds corresponding to slips from -0.3 to +0.3 have been considered. The optimum designs obtained for the three objective functions were compared. It can be concluded that the direct search method of optimization helps in determining an optimum electrical design for each objective function, be it efficiency, reactive power capability, or weight minimization.
Keywords: Direct search, DFIG, equivalent circuit parameters, optimization.
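A minimal compass (pattern) search, the kind of derivative-free direct search used here; the quadratic objective is a placeholder for the paper's circuit-based objective functions (reactive capability, efficiency, weight).

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    x = np.asarray(x0, dtype=float)
    n = len(x)
    for _ in range(max_iter):
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):  # poll +/- each axis
            if f(x + step * d) < f(x):
                x, improved = x + step * d, True
                break
        if not improved:
            step *= 0.5                               # shrink on poll failure
            if step < tol:
                break
    return x

objective = lambda p: (p[0] - 1.2) ** 2 + (p[1] - 0.8) ** 2 + (p[2] - 2.0) ** 2
print(compass_search(objective, [0.0, 0.0, 0.0]).round(3))  # -> [1.2 0.8 2.0]
```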
5499 Unit Commitment Solution Methods
Authors: Sayeed Salam
Abstract:
An effort to develop a unit commitment approach capable of handling large power systems consisting of both thermal and hydro generating units offers a large profitable return. In order to be feasible, the method to be developed must be flexible, efficient, and reliable. In this paper, various proposed methods are described along with their strengths and weaknesses. As all of these methods have weaknesses of some sort, a comprehensive algorithm that combines the strengths of the different methods and overcomes their individual weaknesses would be a suitable approach for solving an industry-grade unit commitment problem.
Keywords: Unit commitment, solution methods, comprehensive algorithm.
5498 Systematic Approach for Energy-Supply-Orientated Production Planning
Authors: F. Keller, G. Reinhart
Abstract:
The efficient and economic allocation of resources is a main goal in the field of production planning and control. Nowadays, a new variable is gaining importance throughout the planning process: energy. Energy-efficiency has already been widely discussed in the literature, but with a strong focus on reducing the overall amount of energy used in production. This paper provides a brief systematic approach showing how energy-supply orientation can be used for energy-cost-efficient production planning, thus combining the ideas of energy-efficiency and energy-flexibility.
Keywords: Production planning and control, energy, efficiency, flexibility.