Search results for: imperialist competition algorithm

1344 Intellectual Capital as Resource Based Business Strategy

Authors: Vidya Nimkar Tayade

Abstract:

Introduction: An organization's intellectual capital is a key factor in its success. Many companies invest heavily in research and development, and any resulting innovation benefits not only the company itself but also other companies, the industry, and mankind as a whole. Companies undertake innovation to increase the profitability of their capital and, indirectly, the pay packages of their employees. The quality of human capital can also improve through such changes, as employees become more skilled and experienced through innovation and invention. To study how intangible capital can be increased, the author has drawn on several books, case studies, charts, and tables. Case studies are particularly valuable because they are proven, established techniques: they enable students to apply theoretical concepts to real-world situations and offer solutions to open-ended problems with multiple potential answers. There are three strategies for increasing intellectual capital: the research push (technology push) strategy, the market pull strategy, and the open innovation strategy. In the research push strategy, research is undertaken first and innovation emerges on its own; after the invention, the inventing company protects it and finds buyers, pushing the invention into the market. In other words, research and development come first and their outcome is then commercialized. In the market pull strategy, commercial opportunities are identified first and research is concentrated in that particular area; research is undertaken to solve a specific problem. Such inventions are easier to commercialize because the problem is identified first and research and development activities are directed toward it. In the open innovation strategy, two or more companies enter into a research agreement and share the benefits of its outcome. Internal and external ideas and technologies are coordinated and then commercialized. Due to globalization, people from outside the company are also invited to take part in research and development; the remuneration of employees of both companies can increase, and the benefits of commercializing the invention are shared. Conclusion: In modern times, not only tangible assets but also intangible assets can be commercialized. The benefits of an invention can be shared by more than one company, competition can become more meaningful, and employees' pay packages can improve. Adopting such strategies to benefit employees, competitors, and stakeholders is the need of the hour.

Keywords: innovation, protection, management, commercialization

Procedia PDF Downloads 167
1343 Personalized Email Marketing Strategy: A Reinforcement Learning Approach

Authors: Lei Zhang, Tingting Xu, Jun He, Zhenyu Yan

Abstract:

Email marketing is one of the most important segments of online marketing and has proven to be the most effective way to acquire and retain customers. The email content is vital to customers. Different customers may have different familiarity with a product, so a successful marketing strategy must personalize email content based on each customer's product affinity. In this study, we build our personalized email marketing strategy with three types of emails: nurture, promotion, and conversion. Each type of email has a different influence on customers. We investigate this difference by analyzing customers' open rates, click rates, and opt-out rates. Feature importance from the response models is also analyzed. The goal of the marketing strategy is to improve the click rate on conversion-type emails. To build the personalized strategy, we formulate the problem as a reinforcement learning problem and adopt a Q-learning algorithm with variations. The simulation results show that our model-based strategy outperforms the current marketer's strategy.
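
As a rough illustration of the Q-learning formulation described above, the sketch below runs a tabular update with an epsilon-greedy policy. The state/action encoding, the reward signal, and all constants are assumptions made for illustration, not the study's actual setup.

```python
import numpy as np

# Hedged sketch of tabular Q-learning for email-type selection.
# Assumption: state = discretized product-affinity level, action = email type.
N_STATES, N_ACTIONS = 5, 3          # affinity buckets; nurture/promotion/conversion
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = np.zeros((N_STATES, N_ACTIONS))

def step(state, action):
    """Hypothetical environment: returns (reward, next_state).
    In the paper this role is played by customer response models."""
    reward = np.random.binomial(1, 0.05 + 0.02 * action)   # e.g., click indicator
    next_state = min(N_STATES - 1, state + (reward > 0))
    return reward, next_state

state = 0
for _ in range(10_000):
    # epsilon-greedy action selection
    if np.random.rand() < EPSILON:
        action = np.random.randint(N_ACTIONS)
    else:
        action = int(Q[state].argmax())
    reward, next_state = step(state, action)
    # standard Q-learning update rule
    Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state].max() - Q[state, action])
    state = next_state
```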

Keywords: email marketing, email content, reinforcement learning, machine learning, Q-learning

Procedia PDF Downloads 192
1342 Harmonic Data Preparation for Clustering and Classification

Authors: Ali Asheibi

Abstract:

The rapid increase in the size of the databases required to store power quality monitoring data has demanded new techniques for analysing and understanding the data. One technique suggested to assist in this analysis is data mining. Preparing raw data so that it is ready for data mining exploration takes up most of the effort and time spent in the whole data mining process. Clustering is an important technique in data mining and machine learning in which underlying, meaningful groups of data are discovered. Large amounts of harmonic data were collected over three years from an actual harmonic monitoring system in a distribution network in Australia. This volume of acquired data makes it difficult to identify the operational events that significantly impact the harmonics generated on the system. In this paper, harmonic data preparation processes for better understanding of the data are presented. Underlying classes in the data have then been identified using a clustering technique based on the Minimum Message Length (MML) method. The underlying operational information contained within the clusters can be rapidly visualised by engineers. The C5.0 algorithm was used for classification and interpretation of the generated clusters.
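
A minimal sketch of the cluster-then-interpret pipeline follows, with two stated substitutions: common Python libraries offer neither MML clustering nor C5.0, so a Gaussian mixture selected by BIC stands in for the MML step and scikit-learn's CART tree stands in for C5.0; the data are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.tree import DecisionTreeClassifier, export_text

# X: rows of prepared harmonic measurements (synthetic stand-in here)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(200, 4)) for m in (0.0, 1.5, 3.0)])

# Pick the number of clusters by an information criterion (BIC as an MML proxy)
best = min(
    (GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(2, 8)),
    key=lambda gm: gm.bic(X),
)
labels = best.predict(X)

# Interpret the discovered clusters with a decision tree (CART standing in
# for the C5.0 algorithm used in the paper)
tree = DecisionTreeClassifier(max_depth=3).fit(X, labels)
print(export_text(tree))
```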

Keywords: data mining, harmonic data, clustering, classification

Procedia PDF Downloads 245
1341 Effect of Inoculum Ratio on Dark Fermentative Hydrogen Production

Authors: Zeynep Yilmazer Hitit, Patrick C. Hallenbeck

Abstract:

Fuel reserve requirements due to the depletion of fossil fuels have increased interest in biohydrogen since the 1990s. In fermentative hydrogen production, pure, mixed, and co-cultures can be used to produce hydrogen. Several previous studies have evaluated hydrogen production by pure cultures of Clostridium butyricum or Enterobacter aerogenes. Evaluating hydrogen production by a co-culture of these microorganisms is an interesting approach, since E. aerogenes is a facultative microorganism with resistance to oxygen, in contrast to the strict anaerobe C. butyricum, and therefore has the ability to maintain anaerobic conditions. It was found that using a co-culture of the facultative E. aerogenes (as a reducing agent and H2 producer) and the obligate anaerobe C. butyricum increases the hydrogen yield by about 50% compared to C. butyricum alone. Using different types of microorganisms for hydrogen production also eliminates the need for expensive reducing agents. The C. butyricum strain was pre-cultured anaerobically at 37 °C for 15 h by inoculating 100 mL of GP medium (pH 6.8) consisting of 1% glucose, 2% polypeptone, 0.2% KH2PO4, 0.05% yeast extract, and 0.05% MgSO4·7H2O; the E. aerogenes strain was pre-cultured aerobically at 30 °C and 150 rpm for 9 h by inoculating 100 mL of TGY medium (pH 6.8) consisting of 0.1% glucose, 0.5% tryptone, 0.1% K2HPO4, and 0.5% yeast extract. All duplicate batch experiments were conducted in 100 mL bottles with different inoculum ratios of Clostridium butyricum to Enterobacter aerogenes (C:E), using 5x diluted rich medium (GP) consisting of 2 g/L glucose, 4 g/L polypeptone, 0.4 g/L KH2PO4, 0.1 g/L yeast extract, and 0.1 g/L MgSO4·7H2O. The inoculum ratios of C. butyricum to E. aerogenes were 2:1, 4:1, 8:1, 1:2, 1:4, 1:8, 1:0, and 0:1. Using glucose as the carbon source aided the observation of microbial behavior and made the effect of the inoculum ratio more evident. Nearly all the glucose in the medium was used to produce hydrogen, except at the 1:0 inoculum ratio (i.e., C. butyricum only). Low glucose consumption leads to a higher hydrogen yield, defined as cumulative hydrogen production per glucose consumed, though still not as high as at the 8:1 C:E ratio. The lowest hydrogen yield was achieved at the 1:8 C:E inoculum ratio (71.9 mL, 1.007±0.01 mol H2/mol glucose), and the highest cumulative hydrogen, hydrogen yield, and dry cell weight were achieved at the 8:1 C:E inoculum ratio (117.4 mL, 2.035±0.082 mol H2/mol glucose, and 0.4 g/L, respectively). In this study, the effect of the inoculum ratio on dark fermentative biohydrogen production using C. butyricum and E. aerogenes was investigated. The maximum hydrogen yield of 2.035 mol H2/mol glucose was obtained using 2 g/L glucose, an initial pH of 6, and a C. butyricum to E. aerogenes inoculum ratio of 8:1. The results showed that the inoculum ratio is an important parameter in hydrogen production, owing to competition between the two microorganisms in using the substrate for growth and the production of by-products. The results presented here could be of great significance for further waste management studies using co-culture hydrogen production.

Keywords: biohydrogen, Clostridium butyricum, dark fermentation, Enterobacter aerogenes, inoculum ratio in biohydrogen production

Procedia PDF Downloads 234
1340 Implementation of Elliptic Curve Cryptography Encryption Engine on a FPGA

Authors: Mohamad Khairi Ishak

Abstract:

Conventional public-key cryptosystems such as RSA (Ron Rivest, Adi Shamir and Leonard Adleman), DSA (Digital Signature Algorithm), and ElGamal are no longer efficient for implementation in small, memory-constrained devices. Elliptic Curve Cryptography (ECC), which allows smaller key lengths compared to conventional public-key cryptosystems, has thus become a very attractive choice for many applications. This paper describes the implementation of an elliptic curve cryptography (ECC) encryption engine on an FPGA. The system has been implemented for two different key sizes, 131 bits and 163 bits. Area and timing analyses are provided for both key sizes for comparison. The cryptosystem, implemented on Altera's EPF10K200SBC600-1, occupies 5945/9984 and 6913/9984 logic cells for the 131-bit and 163-bit implementations, respectively. It operates at up to 43 MHz and performs the point multiplication operation in 11.3 ms for the 131-bit implementation and 14.9 ms for the 163-bit implementation. In terms of speed, our cryptosystem is about 8 times faster than a software implementation of the same system.
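
For readers unfamiliar with the point multiplication operation being benchmarked, here is a toy software sketch of double-and-add scalar multiplication. It uses a small prime field for clarity, whereas the paper's 131-bit and 163-bit hardware engine works over far larger fields.

```python
# Toy affine-coordinate ECC scalar multiplication (double-and-add).
# Illustrative only: curve, modulus, and field size are assumptions.
P = 0xFFFFFFFB            # small prime modulus (2**32 - 5)
A, B = 2, 3               # curve y^2 = x^3 + A*x + B (mod P)

def inv(x):
    """Modular inverse via Fermat's little theorem (P is prime)."""
    return pow(x, P - 2, P)

def add(p, q):
    """Point addition/doubling; None represents the point at infinity."""
    if p is None: return q
    if q is None: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p == q:
        lam = (3 * x1 * x1 + A) * inv(2 * y1) % P      # tangent slope
    else:
        lam = (y2 - y1) * inv(x2 - x1) % P             # chord slope
    x3 = (lam * lam - x1 - x2) % P
    return x3, (lam * (x1 - x3) - y1) % P

def mul(k, p):
    """Point multiplication k*p, the operation timed on the FPGA."""
    acc = None
    while k:
        if k & 1:
            acc = add(acc, p)
        p, k = add(p, p), k >> 1
    return acc
```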

Keywords: elliptic curve cryptography, FPGA, key sizes, memory

Procedia PDF Downloads 317
1339 Parameters Optimization of the Laminated Composite Plate for Sound Transmission Problem

Authors: Yu T. Tsai, Jin H. Huang

Abstract:

In this paper, the specific sound transmission loss (TL) of laminated composite plates (LCPs) with different material properties in each layer is investigated. A numerical method for obtaining the TL of the LCP is proposed using elastic plate theory. A transfer matrix approach is newly presented for computational efficiency in chaining the dynamic stiffness matrices (D-matrices) of the numerous layers of the LCP. Besides the numerical simulations for calculating the TL of the LCP, a material-properties inverse method is presented for designing a laminated composite plate analogous to a metallic plate with a specified TL. The results demonstrate that the proposed computational algorithm is highly efficient, requiring only a small number of iterations to achieve the goal. This method can be effectively employed to design and develop tailor-made materials for various applications.
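
The chaining idea behind the transfer matrix approach can be sketched for the simpler normal-incidence, fluid-layer analogue, as below; the paper's elastic-plate D-matrices are considerably more involved, and all material values here are invented.

```python
import numpy as np

# Simplified transfer-matrix sketch: normal-incidence transmission through
# a stack of layers, chaining one 2x2 matrix per layer.
def layer_matrix(rho, c, d, omega):
    k, Z = omega / c, rho * c               # wavenumber and impedance
    return np.array([[np.cos(k * d), 1j * Z * np.sin(k * d)],
                     [1j * np.sin(k * d) / Z, np.cos(k * d)]])

def transmission_loss(layers, omega, rho0=1.21, c0=343.0):
    T = np.eye(2)
    for rho, c, d in layers:                # chain the per-layer matrices
        T = T @ layer_matrix(rho, c, d, omega)
    Z0 = rho0 * c0                          # impedance of surrounding air
    t = 2.0 / (T[0, 0] + T[0, 1] / Z0 + Z0 * T[1, 0] + T[1, 1])
    return -20 * np.log10(abs(t))

# Example: hypothetical three-layer laminate at 1 kHz
layers = [(1200, 2300, 2e-3), (80, 500, 5e-3), (1200, 2300, 2e-3)]
print(transmission_loss(layers, omega=2 * np.pi * 1000))
```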

Keywords: sound transmission loss, laminated composite plate, transfer matrix approach, inverse problem, elastic plate theory, material properties

Procedia PDF Downloads 385
1338 Application of Artificial Neural Network for Prediction of High Tensile Steel Strands in Post-Tensioned Slabs

Authors: Gaurav Sancheti

Abstract:

This study presents an effective approach for using Artificial Neural Networks (ANNs) to determine the quantity of High Tensile Steel (HTS) strands required in post-tensioned (PT) slabs. Various PT slab configurations were generated by varying the span and depth of the slab, and for each configuration the quantity of required HTS strands was recorded. ANNs with the backpropagation algorithm and varying architectures were developed, and their performance was evaluated in terms of Mean Square Error (MSE). The recorded data for the quantity of HTS strands served as the training database for the developed ANNs. The networks were validated using various validation techniques. The results show that the proposed ANNs have great potential, with good prediction and generalization capability.
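
A minimal stand-in for the described workflow, using scikit-learn's backpropagation-trained MLP on synthetic span/depth data: the strand-quantity relationship below is an assumption for illustration, whereas the paper's feeder database comes from actual PT slab configurations.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: inputs are slab span and depth, target is the
# HTS strand quantity (the functional form below is assumed).
rng = np.random.default_rng(1)
span, depth = rng.uniform(6, 12, 500), rng.uniform(0.15, 0.35, 500)
strands = 4 * span / depth + rng.normal(0, 2, 500)
X = np.column_stack([span, depth])

X_tr, X_te, y_tr, y_te = train_test_split(X, strands, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)
# Performance evaluated by MSE, mirroring the paper's criterion
print("MSE:", mean_squared_error(y_te, ann.predict(X_te)))
```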

Keywords: artificial neural networks, back propagation, conceptual design, high tensile steel strands, post tensioned slabs, validation techniques

Procedia PDF Downloads 221
1337 Framework for Socio-Technical Issues in Requirements Engineering for Developing Resilient Machine Vision Systems Using Levels of Automation through the Lifecycle

Authors: Ryan Messina, Mehedi Hasan

Abstract:

This research examines the impact of using data to generate performance requirements for automation in visual inspections using machine vision. These situations concern design, and how projects can smooth the transfer of tacit knowledge into an algorithm. We propose a framework for specifying machine vision systems that uses varying levels of automation as contingency planning to reduce data-processing complexity. Using data assists in extracting tacit knowledge from those who can perform the manual tasks, to assist in designing the system; this means that real data from the system is always referenced, minimizing errors between participating parties. We propose three indicators for recognizing when a project is at high risk of failing to meet requirements related to accuracy and reliability. All systems tested achieved better integration into operations after applying the framework.

Keywords: automation, contingency planning, continuous engineering, control theory, machine vision, system requirements, system thinking

Procedia PDF Downloads 203
1336 Alternator Fault Detection Using Wigner-Ville Distribution

Authors: Amin Ranjbar, Amir Arsalan Jalili Zolfaghari, Amir Abolfazl Suratgar, Mehrdad Khajavi

Abstract:

This paper describes a two-stage, learning-based fault detection procedure for alternators. The procedure distinguishes three machine conditions: a shortened brush, a high-impedance relay, and a healthy alternator. The fault detection algorithm uses the Wigner-Ville distribution as a feature extractor together with an appropriate feature classifier. In this work, an ANN (Artificial Neural Network) and an SVM (support vector machine) were compared to determine the more suitable classifier, evaluated by the mean squared error criterion. The modules work together to detect possible fault conditions during machine operation. To test the method's performance, a signal database was prepared by imposing different conditions on a laboratory setup. The experimental results indicate that implementing this method achieves satisfactory results.
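
A compact sketch of the feature-extraction stage is given below: a discrete pseudo Wigner-Ville distribution computed from the analytic signal. This is an unoptimized illustration of the general technique, not the paper's implementation, and the test signal is invented.

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x, n_fft=256):
    """Minimal discrete pseudo Wigner-Ville distribution of a real signal."""
    z = hilbert(x)                       # analytic signal suppresses aliasing
    N, half = len(z), n_fft // 2
    W = np.zeros((N, n_fft))
    for n in range(N):
        # lags m such that both z[n+m] and z[n-m] exist
        a = min(n, N - 1 - n, half - 1)
        m = np.arange(-a, a + 1)
        kernel = np.zeros(n_fft, dtype=complex)
        kernel[m % n_fft] = z[n + m] * np.conj(z[n - m])
        W[n] = np.fft.fft(kernel).real   # one time slice of the distribution
    return W

# Example: chirp-like signature; rows index time, columns index frequency
t = np.linspace(0, 1, 512)
W = wigner_ville(np.cos(2 * np.pi * (50 + 40 * t) * t))
```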

Keywords: alternator, artificial neural network, support vector machine, time-frequency analysis, Wigner-Ville distribution

Procedia PDF Downloads 370
1335 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance

Authors: Ammar Alali, Mahmoud Abughaban

Abstract:

Cost reduction and drilling optimization are the goals of many drilling operators. Historically, stuck pipe incidents were a major segment of non-productive time (NPT) associated costs. Traditionally, stuck pipe problems are treated as part of operations and solved post-sticking. However, the real key to savings and success is in predicting stuck pipe incidents and avoiding the conditions leading to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and Machine Learning (ML) algorithms to predict drilling events in real-time from surface drilling data with minimum computational power. The method combines two types of analysis: (1) real-time prediction, and (2) cause analysis. Real-time prediction aggregates the input data, including historical drilling surface data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses these two physical methods (stacking and flattening) to filter any noise in the signature and create a robust pre-determined pilot that adheres to the local geology. Once the drilling operation starts, the Wellsite Information Transfer Standard Markup Language (WITSML) live surface data are fed into a matrix and aggregated at the same frequency as the pre-determined signature. The matrix is then correlated with the pre-determined stuck-pipe signature for the field, in real-time. The correlation uses a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects features relevant to the class and identifies redundant features. The correlation output is interpreted as a real-time probability curve for stuck-pipe incidents. Once this probability passes a fixed threshold defined by the user, the other component, cause analysis, alerts the user to the expected incident based on the set of pre-determined signatures, and a set of recommendations is provided to reduce the associated risk. The validation process involved feeding historical drilling data from an onshore oil field as a live stream, mimicking actual drilling conditions. Pre-determined signatures were created beforehand for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully, with an accuracy of 76%. This detection accuracy could have yielded around a 50% reduction in NPT, equivalent to a 9% cost saving in comparison with offset wells. Predicting the stuck-pipe problem requires a method that captures geological, geophysical, and drilling data and recognizes the indicators of the issue at the field and geological-formation level. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach in its ability to produce such signatures and predict this NPT event.
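
The correlation step can be pictured with the sketch below: a greedy forward search using the standard CFS merit function. The feature matrix, target, and selection size are placeholders for the WITSML surface-drilling channels and stuck-pipe labels described above.

```python
import numpy as np

def cfs_merit(corr_fc, corr_ff):
    """CFS merit: k*avg(feature-class corr) / sqrt(k + k(k-1)*avg(feature-feature corr))."""
    k = len(corr_fc)
    rcf = np.mean(corr_fc)
    rff = np.mean(corr_ff) if len(corr_ff) else 0.0
    return k * rcf / np.sqrt(k + k * (k - 1) * rff)

def cfs_forward(X, y, n_select=5):
    """Greedy forward Correlation-based Feature Selection (illustrative)."""
    n = X.shape[1]
    fc = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n)])
    selected = []
    while len(selected) < n_select:
        best_j, best_m = None, -np.inf
        for j in set(range(n)) - set(selected):
            cand = selected + [j]
            ff = [abs(np.corrcoef(X[:, a], X[:, b])[0, 1])
                  for i, a in enumerate(cand) for b in cand[i + 1:]]
            m = cfs_merit(fc[cand], np.array(ff))
            if m > best_m:
                best_j, best_m = j, m
        selected.append(best_j)
    return selected
```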

Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe

Procedia PDF Downloads 225
1334 Design and Motion Control of a Two-Wheel Inverted Pendulum Robot

Authors: Shiuh-Jer Huang, Su-Shean Chen, Sheam-Chyun Lin

Abstract:

A two-wheel inverted pendulum robot (TWIPR) is designed with two hub DC motors for human riding and motion control evaluation. Accelerometer and gyroscope sensors are chosen to measure the tilt angle and angular velocity of the inverted pendulum robot. The mobile robot's position and velocity are estimated from the DC motors' built-in Hall sensors. The control kernel of this electric mobile robot is an embedded Arduino Nano microprocessor. A handlebar was designed to serve as the steering mechanism. An intelligent, model-free fuzzy sliding mode controller (FSMC) was employed as the main control algorithm for the robot's motion, with adjustments for different control purposes. Intelligent controllers were designed for balance control and moving-speed control under different operating conditions, and the control performance was evaluated based on experimental results.
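
A minimal sketch of a model-free sliding-mode balance law with a crude fuzzy-style gain schedule follows; the surface slope, boundary-layer width, and gain rule are illustrative assumptions, not the tuned FSMC of the paper.

```python
import numpy as np

LAMBDA, PHI = 8.0, 0.05        # sliding-surface slope, boundary-layer width

def fuzzy_gain(s):
    """Crude stand-in for a fuzzy rule base: larger |s| -> larger gain."""
    return 5.0 + 20.0 * min(abs(s), 1.0)

def fsmc(theta, dtheta, theta_ref=0.0):
    """One step of the tilt-balance loop: returns a torque command."""
    e, de = theta - theta_ref, dtheta
    s = de + LAMBDA * e                         # sliding surface
    sat = np.clip(s / PHI, -1.0, 1.0)           # boundary layer vs. chattering
    return -fuzzy_gain(s) * sat

# One control step from sensor-fused tilt estimates (accelerometer + gyro)
u = fsmc(theta=0.05, dtheta=-0.2)
```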

Keywords: balance control, speed control, intelligent controller, two wheel inverted pendulum

Procedia PDF Downloads 223
1333 Research on Development and Accuracy Improvement of an Explosion Proof Combustible Gas Leak Detector Using an IR Sensor

Authors: Gyoutae Park, Seungho Han, Byungduk Kim, Youngdo Jo, Yongsop Shim, Yeonjae Lee, Sangguk Ahn, Hiesik Kim, Jungil Park

Abstract:

In this paper, we present not only the development of an explosion-proof, portable combustible gas leak detector but also an algorithm for improving the accuracy of gas concentration measurements. The presented techniques apply a flame-proof enclosure and intrinsically safe explosion protection to an infrared gas leak detector, for the first time in Korea, and improve accuracy using a linearization recursion equation and Lagrange interpolation polynomials. We tested the sensor characteristics and calibrated suitable input gases against output voltages, and then improved the performance of the combustible gas detectors by reflecting the demands of the gas safety management field. To check the performance of two companies' detectors, we carried out measurement tests with eight standard gases produced by the Korea Gas Safety Corporation. The experimental results demonstrate that our instruments achieve better detection accuracy than the other detectors.
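
The interpolation step can be illustrated directly with SciPy; the calibration pairs below are invented, standing in for the standard-gas calibration points used in the paper.

```python
import numpy as np
from scipy.interpolate import lagrange

# Sketch of the linearization step: map sensor output voltage to gas
# concentration via Lagrange interpolation through calibration points.
volts = np.array([0.10, 0.45, 0.95, 1.60, 2.40])    # IR sensor output (V), assumed
conc = np.array([0.0, 10.0, 25.0, 50.0, 100.0])     # standard-gas concentrations, assumed

poly = lagrange(volts, conc)          # interpolating polynomial
print(poly(1.2))                      # concentration estimate for a 1.2 V reading
```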

Keywords: accuracy improvement, IR gas sensor, gas leak, detector

Procedia PDF Downloads 390
1332 A Review on Water Models of Surface Water Environment

Authors: Shahbaz G. Hassan

Abstract:

Water quality models are very important for predicting changes in surface water quality for environmental management. The aim of this paper is to give an overview of water quality models and to provide directions for selecting models in specific situations. Water quality models include those based on a mechanistic approach and those that simulate water quality without considering a mechanism. Mechanistic models can be widely applied and are capable of long-term simulation, albeit with high complexity; therefore, more space is devoted to explaining the principles of, and application experience with, mechanistic models. Mechanistic models make certain assumptions about rivers, lakes, and estuaries, which limits their application range; this paper introduces the principles and applications of water quality models for these three scenarios. Empirical models, on the other hand, are easier to compute and are not limited by geographical conditions, but they cannot be used with confidence to simulate long-term changes. This paper divides empirical models into two broad categories according to their mathematical algorithms: models based on artificial intelligence and models based on statistical methods.

Keywords: empirical models, mathematical, statistical, water quality

Procedia PDF Downloads 262
1331 Identifying Risk Factors for Readmission Using Decision Tree Analysis

Authors: Sıdıka Kaya, Gülay Sain Güven, Seda Karsavuran, Onur Toka

Abstract:

This study is part of an ongoing research project supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under Project Number 114K404, and participation in this conference was supported by the Hacettepe University Scientific Research Coordination Unit under Project Number 10243. Evaluation of hospital readmissions is gaining importance in terms of quality and cost, and is becoming the target of national policies. In Turkey, the topic of hospital readmission is relatively new on the agenda, and very few studies have been conducted on it. The aim of this study was to determine 30-day readmission rates and risk factors for readmission. Whether a readmission was planned, related to the prior admission, and avoidable was also assessed. The study was designed as a prospective cohort study. 472 patients hospitalized in the internal medicine departments of a university hospital in Turkey between February 1, 2015 and April 30, 2015 were followed up. Analyses were conducted using IBM SPSS Statistics version 22.0 and SPSS Modeler 16.0. The average age of the patients was 56, and 56% of the patients were female. Among these patients, 95 were readmitted, giving an overall readmission rate of 20% (95/472). However, only 31 readmissions were unplanned, an unplanned readmission rate of 6.5% (31/472). Of the 31 unplanned readmissions, 24 were related to the prior admission, and only 6 of these related readmissions were avoidable. To determine risk factors for readmission, we constructed a Chi-square automatic interaction detector (CHAID) decision tree. CHAID decision trees are nonparametric procedures that make no assumptions about the underlying data. The algorithm determines how independent variables best combine to predict a binary outcome based on 'if-then' logic, partitioning each independent variable into mutually exclusive subsets based on the homogeneity of the data. The independent variables included in the analysis were: clinic of the department, occupied beds/total number of beds in the clinic at the time of discharge, age, gender, marital status, educational level, distance to residence (km), number of people living with the patient, any person to help with his/her care at home after discharge (yes/no), regular source (physician) of care (yes/no), day of discharge, length of stay, ICU utilization (yes/no), total comorbidity score, means for each of the 3 dimensions of the Readiness for Hospital Discharge Scale (patient's personal status, patient's knowledge, and patient's coping ability), and number of daycare admissions within 30 days of discharge. In the analysis, we included all 95 readmitted patients (46.12%) but, to balance the data, only 111 (53.88%) of the 377 non-readmitted patients. The risk factors found for readmission were total comorbidity score, gender, patient's coping ability, and patient's knowledge. The strongest identifying factor for readmission was the comorbidity score: if a patient's comorbidity score was higher than 1, the risk of readmission increased. The results of this study need to be validated with other datasets containing more patients. However, we believe that this study will guide further studies of readmission, and that CHAID is a useful tool for identifying risk factors for readmission.
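
Since CHAID is not available in the common Python libraries, the sketch below uses scikit-learn's CART tree as a stand-in to show the same 'if-then' partitioning idea; the data are synthetic, loosely echoing the reported risk factors.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in data; CART here is a proxy for the CHAID algorithm.
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "comorbidity": rng.integers(0, 5, 300),
    "coping": rng.uniform(1, 5, 300),
    "knowledge": rng.uniform(1, 5, 300),
    "female": rng.integers(0, 2, 300),
})
# Assumed outcome pattern echoing the paper: high comorbidity raises risk
p = 0.15 + 0.15 * (df["comorbidity"] > 1) - 0.02 * df["coping"]
df["readmitted"] = rng.random(300) < p.clip(0.01, 0.9)

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20)
tree.fit(df.drop(columns="readmitted"), df["readmitted"])
print(export_text(tree, feature_names=list(df.columns[:-1])))
```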

Keywords: decision tree, hospital, internal medicine, readmission

Procedia PDF Downloads 256
1330 Sports Activities and their Impact on Disability

Authors: Ajved Ahmed

Abstract:

This research paper explores the intricate relationship between sports activities and disability, aiming to shed light on the multifaceted impacts of sports participation on individuals with disabilities. As the world grapples with the challenges posed by the growing population of people with disabilities, understanding the role of sports in their lives becomes increasingly important. The paper begins by providing a comprehensive overview of the diverse forms of disabilities, emphasizing the wide spectrum of physical, sensory, and cognitive impairments. It then delves into the benefits of sports activities for individuals with disabilities, highlighting the profound physical, psychological, and social advantages that engagement in sports can offer. These benefits encompass improved physical fitness, enhanced self-esteem and mental well-being, increased social integration, and a sense of empowerment and independence. Furthermore, the paper examines the barriers and challenges that individuals with disabilities often encounter when attempting to participate in sports activities, ranging from inaccessible facilities to societal prejudices and stereotypes. It underscores the critical role of inclusive sports programs, adaptive equipment, and policy initiatives in overcoming these barriers and fostering an environment where everyone can enjoy the benefits of sports. Through a comprehensive review of existing research and case studies, the paper also explores specific sports and their suitability for various types of disabilities. It discusses adapted sports like wheelchair basketball, blind soccer, and para-swimming, showcasing how these tailored activities not only accommodate disabilities but also promote excellence and competition at the highest levels. Additionally, the research paper delves into the economic and societal implications of increased sports participation among individuals with disabilities. It explores the potential for greater inclusion in the workforce, reduced healthcare costs, and the fostering of a more inclusive and accepting society. This research paper underscores the profound impact of sports activities on individuals with disabilities, highlighting their potential to improve physical health, mental well-being, and social integration. It calls for continued efforts to break down barriers and promote inclusive sports programs to ensure that everyone, regardless of their abilities, can access the transformative power of sports. Ultimately, this study contributes to a broader understanding of disability and sports, emphasizing the importance of inclusivity and accessibility in creating a more equitable and healthier society.

Keywords: sports and health, sports and disability, curing disability through sports, health benefits of sports

Procedia PDF Downloads 62
1329 A Dynamic Software Product Line Approach to Self-Adaptive Genetic Algorithms

Authors: Abdelghani Alidra, Mohamed Tahar Kimour

Abstract:

Genetic algorithms must adapt themselves at design time to cope with the specific requirements of the search problem, and at runtime to balance exploration and convergence objectives. In a previous article, we showed that modeling and implementing Genetic Algorithms (GAs) using the software product line (SPL) paradigm is very worthwhile because they constitute a product family sharing a common code base. In the present article, we propose to extend the use of the feature model of the genetic algorithm family to model the potential states of the GA, in what is called a Dynamic Software Product Line. The objective of this paper is the systematic generation of a reconfigurable architecture that supports the dynamics of the GA and is easily deduced from the feature model. The resulting GA is able to perform dynamic reconfiguration autonomously to speed up the convergence process while producing better solutions. Another important advantage of our approach is the exploitation of recent advances in the domain of dynamic SPLs to enhance the performance of GAs.
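
One simple way to picture the runtime reconfiguration is a GA that switches its mutation-rate "feature" when population diversity collapses, as in the hedged sketch below; the objective, thresholds, and operators are illustrative, not the paper's generated architecture.

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(x):
    """Illustrative objective (sphere function, maximized as -sum(x^2))."""
    return -np.sum(x**2, axis=1)

pop = rng.uniform(-5, 5, (60, 10))
mut_rate = 0.05
for gen in range(200):
    f = fitness(pop)
    parents = pop[np.argsort(f)[-30:]]                 # truncation selection
    # uniform crossover: each gene of each child drawn from a random parent
    children = parents[rng.integers(0, 30, (60, 10)), np.arange(10)]
    # runtime reconfiguration: raise mutation when diversity collapses
    diversity = pop.std()
    mut_rate = 0.3 if diversity < 0.5 else 0.05
    mask = rng.random(children.shape) < mut_rate
    pop = children + mask * rng.normal(0, 0.5, children.shape)
```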

Keywords: self-adaptive genetic algorithms, software engineering, dynamic software product lines, reconfigurable architecture

Procedia PDF Downloads 282
1328 Numerical Model for Investigation of Recombination Mechanisms in Graphene-Bonded Perovskite Solar Cells

Authors: Amir Sharifi Miavaghi

Abstract:

Recombination mechanisms in graphene-bonded perovskite solar cells are investigated using a numerical model in which doped-graphene structures are employed as the anode/cathode bonding semiconductor. The dark and light current density-voltage (J-V) curves are investigated by regression analysis. Loss mechanisms such as the back-contact barrier and deep surface defects in the absorber layer are determined by fitting the simulated cell performance to the measurements using the differential evolution global optimization algorithm. The performance analysis of the cell includes J-V curves examined at different temperatures, and the open-circuit voltage (Voc) under different light intensities as a function of temperature. Based on the proposed numerical model and the identified loss mechanisms, our approach can be used to further improve the efficiency of the solar cell. Given the high demand for alternative energy sources, solar cells are a good option for energy generation via the photovoltaic effect.

Keywords: numerical model, recombination mechanism, graphene, perovskite solar cell

Procedia PDF Downloads 67
1327 Using of Particle Swarm Optimization for Loss Minimization of Vector-Controlled Induction Motors

Authors: V. Rashtchi, H. Bizhani, F. R. Tatari

Abstract:

This paper presents a new online loss-minimization scheme for an induction motor drive. Among the many loss minimization algorithms (LMAs) for induction motors, particle swarm optimization (PSO) has the advantages of fast response and high accuracy. However, the performance of PSO and other optimization algorithms depends on the accuracy of the modeling of the motor drive and its losses, and in developing the loss model there is always a trade-off between accuracy and complexity. This paper presents a new online optimization to determine the optimum flux level for efficiency optimization of the vector-controlled induction motor drive. An induction motor (IM) model in d-q coordinates is referenced to the rotor magnetizing current. This transformation results in no leakage inductance on the rotor side, so the decomposition into d-q components in the steady-state motor model can be utilized in deriving the motor loss model. The suggested algorithm is simple to implement.
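
A minimal PSO sketch over a single decision variable (the flux level) is shown below; the loss surrogate combining copper and iron loss terms is an assumed stand-in for the paper's d-q loss model.

```python
import numpy as np

def loss(flux, torque=1.0):
    """Assumed convex surrogate: copper loss ~ 1/flux^2, iron loss ~ flux^2."""
    return torque**2 / flux**2 + 0.8 * flux**2

rng = np.random.default_rng(4)
x = rng.uniform(0.2, 1.5, 30)          # particle positions (flux levels)
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), loss(x)
gbest = x[pbest_f.argmin()]

for _ in range(100):
    r1, r2 = rng.random(30), rng.random(30)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0.2, 1.5)
    f = loss(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()]

print("optimal flux level:", gbest)
```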

Keywords: induction machine, loss minimization, magnetizing current, particle swarm optimization

Procedia PDF Downloads 630
1326 Spectral Anomaly Detection and Clustering in Radiological Search

Authors: Thomas L. McCullough, John D. Hague, Marylesa M. Howard, Matthew K. Kiser, Michael A. Mazur, Lance K. McLean, Johanna L. Turk

Abstract:

Radiological search and mapping depend on the successful recognition of anomalies in large data sets which contain varied and dynamic backgrounds. We present a new algorithmic approach for real-time anomaly detection which is resistant to common detector imperfections, avoids the limitations of a source template library, and provides immediate and easily interpretable user feedback. This algorithm is based on a continuous wavelet transform for variance reduction and evaluates the deviation between a foreground measurement and a local background expectation using methods from linear algebra. We also present a technique for recognizing and visualizing spectrally similar clusters of data. This technique uses Laplacian Eigenmap Manifold Learning to perform dimensional reduction which preserves the geometric "closeness" of the data while maintaining sensitivity to outlying data. We illustrate the utility of both techniques on real-world data sets.
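
A rough sketch of the scoring idea follows, assuming Morlet CWT smoothing and per-channel background statistics (both stand-ins for the paper's exact pipeline) and synthetic spectra.

```python
import numpy as np
import pywt

def anomaly_score(foreground, background_window):
    """Deviation of a foreground spectrum from a local background expectation."""
    mu = background_window.mean(axis=0)            # local background expectation
    var = background_window.var(axis=0) + 1e-9
    resid = foreground - mu
    # variance reduction: smooth the residual with a Morlet CWT band
    coeffs, _ = pywt.cwt(resid, scales=np.arange(4, 16), wavelet="morl")
    smooth = coeffs.mean(axis=0)
    # Mahalanobis-style deviation using background channel variances
    return float(np.sqrt(np.sum(smooth**2 / var)))

rng = np.random.default_rng(5)
bkg = rng.poisson(20, size=(50, 128)).astype(float)      # 50 background spectra
fg = bkg[0] + 30 * np.exp(-0.5 * ((np.arange(128) - 60) / 2.0) ** 2)  # injected peak
print(anomaly_score(fg, bkg))
```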

Keywords: radiological search, radiological mapping, radioactivity, radiation protection

Procedia PDF Downloads 691
1325 The Extent of Virgin Olive-Oil Prices' Distribution Revealing the Behavior of Market Speculators

Authors: Fathi Abid, Bilel Kaffel

Abstract:

The olive tree, the winter olive harvest, and the production of olive oil (better known to professionals as the crushing operation) have long interested institutional traders such as olive-oil offices, private companies in the food industry refining and extracting pomace olive oil, and public and private export-import companies specializing in olive oil. Contrary to what one might expect, the major problem facing olive oil producers each winter campaign is not whether the harvest will be good, but whether the sale price will allow them to cover production costs and achieve a reasonable profit margin. These questions are entirely legitimate given the importance of the issue and the heavy complexity of the uncertainty, with competition made tougher by high levels of indebtedness and by the experience and expertise of speculators and producers whose objectives sometimes conflict. The aim of this paper is to study the formation mechanism of olive oil prices in order to learn about speculators' behavior and expectations in the market: how they contribute through their industry knowledge and financial alliances, and the size of the financial challenge involved in building private information channels globally to take advantage of the market. The methodology proceeds in two stages. In the first stage, we study econometrically the formation mechanisms of the olive oil price in order to understand market participants' behavior, implementing ARMA, SARMA, and GARCH models and stochastic diffusion processes. The second stage is devoted to prediction, using a combined wavelet-ANN approach. Our main findings indicate that olive oil market participants interact with each other in ways that produce the observed stylized facts. Unstable participant behavior creates volatility clustering, nonlinear dependence, and cyclicity. By imitating each other during some periods of the campaign, different participants contribute to the fat tails observed in the olive oil price distribution. The best prediction model for the olive oil price is a backpropagation artificial neural network with inputs based on wavelet decomposition and recent past history.
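
The two econometric stages can be sketched with statsmodels and the arch package on a synthetic fat-tailed return series; the model orders and data below are illustrative, not the study's fitted specifications.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

# Synthetic fat-tailed return series standing in for olive-oil price returns
rng = np.random.default_rng(6)
returns = rng.standard_t(df=5, size=500) * 0.02

# Stage 1: conditional mean (ARMA(1,1) via ARIMA with d=0)
arma = ARIMA(returns, order=(1, 0, 1)).fit()
print(arma.summary().tables[1])

# Stage 2: conditional variance (GARCH(1,1)) to capture volatility clustering
garch = arch_model(returns * 100, vol="GARCH", p=1, q=1).fit(disp="off")
print(garch.params)
```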

Keywords: olive oil price, stylized facts, ARMA model, SARMA model, GARCH model, combined wavelet-artificial neural network, continuous-time stochastic volatility model

Procedia PDF Downloads 338
1324 Uncertainty Estimation in Neural Networks through Transfer Learning

Authors: Ashish James, Anusha James

Abstract:

The impressive predictive performance of deep learning techniques on a wide range of tasks has led to their widespread use. Estimating the confidence of these predictions is paramount for improving the safety and reliability of such systems. However, the uncertainty estimates provided by neural networks (NNs) tend to be overconfident and unreasonable. Ensembles of NNs typically produce good predictions, but their uncertainty estimates tend to be inconsistent. Inspired by these observations, this paper presents a framework that quantitatively estimates uncertainties by leveraging advances in transfer learning, through a slight modification to existing training pipelines. This promising algorithm is developed with deployment in mind for real-world problems that already enjoy good predictive performance, by reusing pretrained models. The idea is to capture the behavior of the NN trained for the base task by augmenting it with uncertainty estimates from a supplementary network. A series of experiments with known and unknown distributions shows that the proposed approach produces well-calibrated uncertainty estimates with high-quality predictions.
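
A hedged sketch of the core idea, with scikit-learn MLPs standing in for the deep networks: freeze a "pretrained" base regressor and train a supplementary network on its log squared residuals, yielding an input-dependent variance estimate. Data and architectures are invented.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
X = rng.uniform(-3, 3, (1000, 1))
# heteroscedastic synthetic target: noisier for positive inputs
y = np.sin(X[:, 0]) + rng.normal(0, 0.1 + 0.1 * (X[:, 0] > 0), 1000)

# "pretrained" base network, kept fixed afterwards
base = MLPRegressor((32, 32), max_iter=3000, random_state=0).fit(X, y)
resid2 = (y - base.predict(X)) ** 2

# supplementary network learns log squared residuals (for numeric stability)
aux = MLPRegressor((32, 32), max_iter=3000, random_state=0)
aux.fit(X, np.log(resid2 + 1e-6))

X_new = np.array([[1.5]])
mean = base.predict(X_new)
std = np.sqrt(np.exp(aux.predict(X_new)))   # input-dependent uncertainty
print(mean, std)
```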

Keywords: uncertainty estimation, neural networks, transfer learning, regression

Procedia PDF Downloads 134
1323 The Acceptable Roles of Artificial Intelligence in the Judicial Reasoning Process

Authors: Sonia Anand Knowlton

Abstract:

There are some cases where we as a society feel deeply uncomfortable with the use of Artificial Intelligence (AI) tools in the judicial decision-making process, and justifiably so. A perfect example is COMPAS, an algorithmic model that predicts recidivism rates of offenders to assist in the determination of their bail conditions. COMPAS turned out to be extremely racist: it massively overpredicted recidivism rates of Black offenders and underpredicted recidivism rates of white offenders. At the same time, there are certain uses of AI in the judicial decision-making process that many would feel more comfortable with and even support. Take, for example, a “super-breathalyzer,” an (albeit imaginary) tool that uses AI to deliver highly detailed information about the subject of the breathalyzer test to the legal decision-makers analyzing their drunk-driving case. This article evaluates the point at which a judge’s use of AI tools begins to undermine the public’s trust in the administration of justice. It argues that the answer to this question depends on whether the AI tool is in a role in which it must perform a moral evaluation of a human being.

Keywords: artificial intelligence, judicial reasoning, morality, technology, algorithm

Procedia PDF Downloads 81
1322 Spherical Harmonic Based Monostatic Anisotropic Point Scatterer Model for RADAR Applications

Authors: Eric Huang, Coleman DeLude, Justin Romberg, Saibal Mukhopadhyay, Madhavan Swaminathan

Abstract:

High performance computing (HPC) based emulators can be used to model the scattering from multiple stationary and moving targets for RADAR applications. These emulators rely on the RADAR Cross Section (RCS) of the targets being available for complex scenarios. Representing the RCS using tables generated from electromagnetic (EM) simulations is often cumbersome, leading to large storage requirements. This paper proposes a spherical harmonic based anisotropic scatterer model to represent the RCS of complex targets. The problem of finding the locations and reflection profiles of all scatterers can be formulated as a linear least squares problem with a special sparsity constraint, which this paper solves using a modified Orthogonal Matching Pursuit algorithm. The results show that the spherical harmonic based scatterer model can effectively represent the RCS data of complex targets.
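
The sparse-recovery step can be sketched with scikit-learn's standard OMP (the paper uses a modified variant); the random dictionary below stands in for the spherical-harmonic scatterer response model.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(8)
n_samples, n_candidates = 200, 1000
# Dictionary of candidate scatterer responses (random placeholder columns)
D = rng.normal(size=(n_samples, n_candidates))

# Synthetic observations generated by 5 true scatterers
true_idx = rng.choice(n_candidates, 5, replace=False)
rcs = D[:, true_idx] @ rng.normal(size=5)

# Sparse recovery: which candidates explain the observed RCS samples
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5).fit(D, rcs)
recovered = np.flatnonzero(omp.coef_)
print(sorted(true_idx), sorted(recovered))
```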

Keywords: RADAR, RCS, high performance computing, point scatterer model

Procedia PDF Downloads 189
1321 An Improved Mesh Deformation Method Based on Radial Basis Function

Authors: Xuan Zhou, Litian Zhang, Shuixiang Li

Abstract:

Mesh deformation using the radial basis function (RBF) interpolation method has been demonstrated to produce quality meshes at relatively little computational cost using a concise algorithm. However, it still suffers from limited deformation capability, especially for large deformations. In this paper, a pre-displacement improvement is proposed to address the problem that invalid meshes frequently appear near moving inner boundaries, owing to the large relative displacement of the nodes near those boundaries. In this improvement, nodes near the inner boundaries are first associated with the nearby boundary nodes, and a pre-displacement based on the displacements of the associated boundary nodes is added to the nodes near the boundaries, bringing their displacement closer to the boundary deformation and improving the deformation capability. Several 2D and 3D numerical simulation cases show that the pre-displacement improvement for the RBF method significantly improves mesh quality near inner boundaries and deformation capability, with little increase in computational burden.
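
The baseline RBF interpolation step (the stage the paper's pre-displacement improvement builds on) can be sketched with SciPy's RBFInterpolator; the geometry and boundary motion below are invented.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(9)
# Boundary node coordinates and their imposed displacements (illustrative)
boundary = rng.uniform(0, 1, (100, 2))
disp = np.column_stack([0.05 * np.sin(2 * np.pi * boundary[:, 0]),
                        np.zeros(100)])

# Fit an RBF to the known boundary motion, then propagate it inward
rbf = RBFInterpolator(boundary, disp, kernel="thin_plate_spline")
interior = rng.uniform(0, 1, (500, 2))      # interior mesh nodes
deformed = interior + rbf(interior)         # deformed node positions
```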

Keywords: mesh deformation, mesh quality, background mesh, radial basis function

Procedia PDF Downloads 364
1320 A Reliable Multi-Type Vehicle Classification System

Authors: Ghada S. Moussa

Abstract:

Vehicle classification is an important task in traffic surveillance and intelligent transportation systems. Classification of vehicle images faces several problems, such as high intra-class variation, occlusion, shadow, and illumination changes. These problems and others must be considered to develop a reliable vehicle classification system. In this study, a reliable multi-type vehicle classification system based on the Bag-of-Words (BoW) paradigm is developed. Our proposed system uses and compares four well-known classifiers: Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), k-Nearest Neighbour (KNN), and Decision Tree. Vehicles are classified into four categories: motorcycles, small, medium, and large. Experiments on a large dataset show that our approach is efficient and reliable in classifying vehicles, with an accuracy of 95.7%. The SVM outperforms the other classification algorithms in terms of both accuracy and robustness, alongside a considerable reduction in execution time. A notable strength of the developed system is that it can serve as a framework for many vehicle classification systems.
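
A skeleton of the BoW pipeline is given below; random arrays stand in for the SIFT descriptors named in the keywords, and the vocabulary size, classes, and data are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(10)
K = 50                                               # visual vocabulary size

def descriptors(image_id):
    """Placeholder for per-image SIFT descriptors (120 x 128 array)."""
    return rng.normal(loc=image_id % 4, size=(120, 128))

train_ids = np.arange(200)
labels = train_ids % 4                               # motorcycle/small/medium/large

# Build the vocabulary by clustering all local descriptors
all_desc = np.vstack([descriptors(i) for i in train_ids])
vocab = KMeans(n_clusters=K, n_init=4, random_state=0).fit(all_desc)

def bow_histogram(image_id):
    """Quantize an image's descriptors into a normalized word histogram."""
    words = vocab.predict(descriptors(image_id))
    return np.bincount(words, minlength=K) / len(words)

X = np.vstack([bow_histogram(i) for i in train_ids])
clf = SVC(kernel="rbf").fit(X, labels)               # SVM on BoW histograms
```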

Keywords: vehicle classification, bag-of-words technique, SVM classifier, LDA classifier, KNN classifier, decision tree classifier, SIFT algorithm

Procedia PDF Downloads 355
1319 Emotion Expression of the Leader and Collective Efficacy: Pride and Guilt

Authors: Hsiu-Tsu Cho

Abstract:

Collective efficacy refers to a group's sense of its capacity to complete a task successfully or to reach its objectives. Little effort has been expended on investigating the relationship between a leader's emotion expression and collective efficacy. In this study, we examined the impact of different emotions and of the emotion expression of a group leader on collective efficacy, and explored whether the effects of expression differed between negative and positive emotions. A total of 240 undergraduate and graduate students, recruited using Facebook and posters at a university, participated in this research. The participants were separated randomly into 80 four-person groups consisting of three participants and a confederate. They were randomly assigned to one of five conditions in a 2 (pride vs. guilt) × 2 (emotion expression of group leader vs. no emotion expression of group leader) factorial design plus a control condition. Each four-person group was instructed to compete for a reward in a group competition involving solving the five-disk Tower of Hanoi puzzle and making decisions on an investment case. We surveyed the participants using an emotion measure revised from previous research and a collective efficacy questionnaire on a 5-point scale. To induce an emotion of pride (or guilt), the experimenter announced after the group task whether the group's performance was good enough to have a chance of getting the reward (ranking in the top or bottom 20% among all groups). The leader (confederate) then either expressed or did not express a feeling of pride (or guilt), according to the assigned condition. To check the manipulation of emotion, we added a control condition under which the experimenter revealed no results regarding group performance, maintaining a neutral emotion. One-way ANOVAs and post hoc pairwise comparisons of pride and guilt scores among the three emotion conditions (pride, guilt, and control) indicated that the manipulations of emotion were successful (pride: F(1,75) = 32.41, p < .001; guilt: F(1,75) = 6.75, p < .05). A two-way between-measures ANOVA was conducted to examine the main effects of emotion type and emotion expression, as well as their interaction effect, on collective efficacy. The experimental findings suggest that pride did not affect collective efficacy more than guilt did (F(1,60) = 1.90, ns.) and that the group leader did not motivate collective efficacy regardless of whether he or she expressed emotion (F(1,60) = .89, ns.). However, the interaction effect of emotion type and emotion expression was statistically significant (F(1,60) = 4.27, p < .05, ω2 = .066), accounting for 6.6% of the variance. Further analysis revealed that, under the pride condition, the leader enhanced collective efficacy when expressing emotion, whereas under the guilt condition, expressing emotion reduced collective efficacy. Overall, these findings challenge the assumption that the effects of emotion expression are the same for all emotions, and suggest that a leader should be cautious when expressing negative emotions toward a group, to avoid reducing group effectiveness.

Keywords: collective efficacy, group leader, emotion expression, pride, guilt

Procedia PDF Downloads 328
1318 Optimal Load Control Strategy in the Presence of Stochastically Dependent Renewable Energy Sources

Authors: Mahmoud M. Othman, Almoataz Y. Abdelaziz, Yasser G. Hegazy

Abstract:

This paper presents a load control strategy based on a modification of the Big Bang-Big Crunch optimization method. The proposed strategy aims to determine the optimal load to be controlled, and the corresponding time of control, in order to minimize the energy purchased from the substation. The presented strategy helps the distribution network operator rely on renewable energy sources in supplying the system demand. The renewable energy sources used in the presented study are modeled using the diagonal band copula method and sequential Monte Carlo simulation, in order to accurately capture the multivariate stochastic dependence between wind power, photovoltaic power, and the system demand. The proposed algorithms are implemented in the MATLAB environment and tested on the IEEE 37-node feeder. Several case studies are carried out, and the subsequent discussion shows the effectiveness of the proposed algorithm.
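
A minimal Big Bang-Big Crunch loop is sketched below; the toy cost function stands in for the energy purchased from the substation as a function of the controlled-load schedule, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

def energy_cost(x):
    """Toy stand-in for substation energy purchase given a 24-hour schedule."""
    return np.sum((x - np.sin(np.linspace(0, np.pi, x.size))) ** 2)

dim, n_pop = 24, 40
center = rng.uniform(0, 1, dim)
for it in range(1, 101):
    radius = 1.0 / it                                         # shrinking spread
    pop = center + radius * rng.normal(size=(n_pop, dim))     # "big bang"
    fit = np.array([energy_cost(p) for p in pop])
    w = 1.0 / (fit + 1e-12)                                   # fitness weights
    center = (w[:, None] * pop).sum(axis=0) / w.sum()         # "big crunch"

print("best schedule cost:", energy_cost(center))
```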

Keywords: big bang big crunch, distributed generation, load control, optimization, planning

Procedia PDF Downloads 342
1317 Compact, Lightweight, Low Cost, Rectangular Core Power Transformers

Authors: Abidin Tortum, Kubra Kocabey

Abstract:

The transformer sector is one of the most competitive in the world, and sales are made with limited profit margins. Manufacturers must therefore develop cost-cutting designs to achieve higher profits. The use of rectangular cores and coils in transformer design is one method of reducing costs. To the best of our knowledge, we are the first company in our country to produce rectangular-core power transformers. In this project, BETA will design and manufacture rectangular core-and-coil power transformers for the first time in Turkey, in order to reduce cost, produce more compact products, and increase the competitiveness of the product. The transformer to be designed is rated 16 MVA at the 33/11 kV voltage level. The rectangular design of the transformer core and windings reduces no-load losses, and the rectangular type is also the least costly. However, short-circuit forces do not act on every point of rectangular windings in the same way: more force is applied inward at the mid-points of the low-voltage winding, while the opposite occurs in the high-voltage winding, so the windings tend to deform in the event of a short circuit. These design difficulties had to be overcome while pursuing the project objectives. The rectangular-core transformers to be produced in our country offer a more compact structure than conventional transformers; both the height and the width are smaller, so the transformer takes up less space in the substation. Because the transformer tank is smaller, less oil is used and the weight is lower. Biotemp natural ester fluid is used in the rectangular transformer, and its cooling performance is analyzed. The reduced dimensions also lower the cost, and the decrease in the amount of oil used increases the environmental friendliness of the developed product. Transportation costs are reduced along with the total weight, and the carbon emissions generated during transportation are reduced accordingly. Since the low-voltage winding is wound with a foil winding technique, a structure more resistant to short-circuit forces is obtained. No-load losses are lower due to the use of a rectangular core. The project was handled in three phases: in the first, preliminary research and design were carried out; in the second, prototype manufacturing of the completed design began; and in the last, the developed prototype was subjected to routine, type, and special tests.

Keywords: rectangular core, power transformer, transformer, productivity

Procedia PDF Downloads 119
1316 PEA Design of the Direct Control for Training Motor Drives

Authors: Abdulatif Abdulsalam Mohamed Shaban

Abstract:

This paper presents the state of the art of Procedure Entry Array (PEA) design, with a focus on control system applications. It begins with an overview of PEA technology development, followed by a survey of design methodologies and of the use of programmable description languages and system-level design tools, which enable a practical approach based on a single model for complete engineering electronics systems. Three main design rules are implemented in the system: algorithm-based fine-tuning, modularity, and architectural constraints on the control action. An overview of the contributions and limits of PEAs is also given, followed by a short survey of PEA-based intelligent controllers for recent engineering systems. Finally, two complete and timely case studies are presented to illustrate the benefits of a PEA implementation when using the proposed system modelling and design approach. These consist of the direct control for training motor drives and the control of a diesel-driven stand-alone generator with the help of logic design.

Keywords: control (DC), engineering electronics systems, training motor drives, procedure entry array

Procedia PDF Downloads 513
1315 Sensor Fault-Tolerant Model Predictive Control for Linear Parameter Varying Systems

Authors: Yushuai Wang, Feng Xu, Junbo Tan, Xueqian Wang, Bin Liang

Abstract:

In this paper, a sensor fault-tolerant control (FTC) scheme using robust model predictive control (RMPC) and set-theoretic fault detection and isolation (FDI) is extended to linear parameter varying (LPV) systems. First, a group of set-valued observers is designed for passive fault detection (FD), with the observer gains obtained by minimizing the size of the invariant set of the state estimation-error dynamics. Second, an input set for fault isolation (FI) is designed offline through set theory, for actively isolating faults after FD. Third, an RMPC controller based on state estimation is designed for LPV systems to control the system in the presence of disturbance and measurement noise while tolerating faults. In addition, an FTC algorithm is proposed to keep the plant operating in the corresponding mode when a fault occurs. Finally, a numerical example is used to show the effectiveness of the proposed results.
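
The passive set-based FD idea can be sketched in scalar form: propagate an estimation-error bound alongside the observer and flag a fault when the output residual escapes it. The model, gain, bounds, and injected fault below are all illustrative, not the paper's LPV design.

```python
import numpy as np

rng = np.random.default_rng(12)
a, c, l = 0.95, 1.0, 0.5              # scalar "frozen" model and observer gain
w_bar, v_bar = 0.01, 0.02             # disturbance and noise bounds (assumed)

x, xhat, e_bound = 0.0, 0.0, 0.0
for k in range(100):
    x = a * x + rng.uniform(-w_bar, w_bar)
    fault = 0.3 if k >= 60 else 0.0   # additive sensor fault from step 60
    y = c * x + rng.uniform(-v_bar, v_bar) + fault
    resid = y - c * xhat
    # worst-case estimation-error bound under fault-free operation:
    # e+ = (a - l*c)*e + w - l*v  =>  |e+| <= |a-l*c|*|e| + w_bar + |l|*v_bar
    e_bound = abs(a - l * c) * e_bound + w_bar + abs(l) * v_bar
    if abs(resid) > c * e_bound + v_bar:
        print("fault flagged at step", k)
        break
    xhat = a * xhat + l * resid       # Luenberger-style observer update
```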

Keywords: fault detection, linear parameter varying, model predictive control, set theory

Procedia PDF Downloads 251